1503.01081
\section{Introduction} Induction of transcription through extracellular signalling can yield rapid changes in gene expression for many genes. Establishing the timing of events during this process is important for understanding the rate-limiting mechanisms regulating the response and vital for inferring causality of regulatory events. Several processes influence the patterns of mRNA abundance observed in the cell, including the kinetics of transcriptional initiation, elongation, splicing and mRNA degradation. It was recently demonstrated that significant delays due to the kinetics of splicing can be an important factor in a focussed study of genes induced by Tumor Necrosis Factor (TNF-$\alpha$)~\cite{Hao2013}. Delayed transcription can play an important functional role in the cell; for example, inducing oscillations within negative feedback loops~\cite{Monk2003} or facilitating ``just-in-time'' transcriptional programmes with optimal efficiency~\cite{Zaslaver2004}. It is therefore important to identify such delays and to better understand how they are regulated. In this contribution we combine RNA polymerase (pol-II) ChIP-Seq data with RNA-Seq data to study transcription kinetics of estrogen receptor signalling in breast cancer cells. Using an unbiased genome-wide modelling approach we find evidence for large delays in mRNA production in 11\% of the genes with a quantifiable signal in our data. A statistical analysis of genes exhibiting large delays indicates that splicing kinetics is a significant factor and can be the rate-limiting step for gene induction. A high-throughput sequencing approach is attractive as it gives broad coverage and thus allows us to uncover the typical properties of the system. However, high-throughput data are associated with significant sources of noise and the temporal resolution of our data is necessarily reduced compared to previous studies using more focussed PCR-based assays~\cite{Hao2013, Zeisel2011}.
We have therefore developed a statistically efficient model-based approach for estimating the kinetic parameters of interest. We use Bayesian estimation to provide a principled assessment of the uncertainty in our inferred model parameters. Our model can be applied to all genes with sufficiently strong signal in both the mRNA and pol-II data with only mild restrictions on the shape of the transcriptional activation profile (1814 genes here). A number of other works studying transcription and splicing dynamics (e.g.~\cite{Khodor2012,Pandya-Jones2013,Hao2013}) forgo detailed dynamical modelling, which limits their ability to properly account for varying mRNA half-lives. Our statistical model incorporates a linear ordinary differential equation of transcription dynamics, including mRNA degradation. Similar linear differential equation models have been proposed as models of mRNA dynamics previously~\cite{Rabani2011,Zeisel2011,Martelot2012}, but assuming a specific parametric form for the transcriptional activity. In contrast, we apply a non-parametric Gaussian process framework that can accommodate a quite general shape of transcriptional activity. As demonstrated previously~\cite{Lawrence2007,Gao2008,Honkela2010}, the linearity of the differential equation allows efficient exact Bayesian inference of the transcriptional activity function. Before presenting our results we outline our modelling approach. \section{Model-based inference of transcriptional delays} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{RNA_processing_cartoon_2col} \caption{A cartoon illustrating the underlying biology and data gathering at a single time point (left) and time series modelling (right). The data come from pol-II ChIP-seq, summarised over the last 20\% of the gene body, and RNA-seq computationally split to pre-mRNA and different mRNA transcript expression levels. 
The modelling on the right shows the effect of changing mRNA half-life ($t_{1/2}$) or RNA production delay ($\Delta$) on the model response: both induce a delay on the mRNA peak relative to the pol-II peak, but the profiles have otherwise distinct shapes.} \label{fig:cartoon} \end{figure*} Our modelling approach is summarised in Fig.~\ref{fig:cartoon}. We model the dynamics of transcription using a linear differential equation, \begin{equation} \frac{\mathrm{d}m(t)}{\mathrm{d}t} = \beta p(t-\Delta) - \alpha m(t) \ , \label{eqn:model} \end{equation} where $m(t)$ is the mature mRNA abundance and $p(t)$ is the transcription rate at the 3' end of the gene at time $t$ which is scaled by a parameter $\beta$ since we do not know the scale of our $p(t)$ estimates. The parameter $\Delta$ captures the delay between transcription completion and mature mRNA production. We refer to this as the RNA production delay, defined as the time required for the pre-mRNA to disengage from the polymerase and be fully processed into a mature transcript. The parameter $\alpha$ is the mRNA degradation rate which determines the mRNA half-life ($t_{1/2} = \ln 2/\alpha$). We infer all model parameters ($\alpha$, $\beta$, $\Delta$, the noise variance and parameters of the Gaussian process covariance function discussed below) using a Markov chain Monte Carlo (MCMC) procedure. The posterior distribution of the model parameters quantifies our uncertainty and we use percentiles of the posterior distribution when reporting credible regions around the mean or median values. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{pol2_example_fits} \caption{Left: Heat map of inferred pol-II and mRNA activity profiles after MCF-7 cells are stimulated with estradiol. Genes with sufficient signal for modelling are sorted by the time of peak pol-II activity in the fitted model. Right: Examples of the fitted model for six genes.
For each gene, we show the fit using the pol-II ChIP-Seq data (collected from the final 20\% of the transcribed region) representing the transcriptional activity $p(t)$ (see Eqn.~\eqref{eqn:model}), and using the RNA-seq data to represent gene expression $m(t)$. Solid red/green lines show the mean model estimates for the pol-II/mRNA profiles respectively with associated credible regions. In each case we show the posterior distribution for the inferred delay parameter $\Delta$ to the right of the temporal profiles. Note that the final measurement times are very far apart (the $x$-axis is compressed to aid visualisation) leading to high uncertainty in the model fit at late times. However, this does not significantly affect the inference of delays for early induced genes. \label{fig:models1} } \end{figure*} We measure the transcriptional activity $p(t)$ using RNA polymerase (pol-II) ChIP-Seq time course data collected close to the 3' end of the gene (reads lying in the last 20\% of the transcribed region). Our main assumption is that pol-II abundance at the 3' end of the gene is proportional to the production rate of mature mRNA after a possible delay $\Delta$ due to disengaging from the polymerase and processing. The mRNA abundance is measured using RNA-Seq reads mapping to annotated transcripts, taking all annotated transcripts into account and resolving mapping ambiguities using a probabilistic method~\cite{Glaus2012} (see Methods Section for details). As we limit our analysis to pol-II data collected from the 3'-end of the transcribed region, we do not expect a significant contribution to $\Delta$ from transcriptional delays when fitting the model. Such transcriptional delays have recently been studied by modelling transcript elongation dynamics using pol-II ChIP-Seq time course data~\cite{waMaina2014} and nascent mRNA (GRO-Seq) data~\cite{Danko2013} in the same system. 
Here we instead focus on production delays that can occur after elongation is essentially complete. Existing approaches to fitting models of this type have assumed a parametric form for the activation function $p(t)$~\cite{Rabani2011,Zeisel2011,Martelot2012}. We avoid restricting the function shape by using a non-parametric Bayesian procedure for fitting $p(t)$. We model $p(t)$ as a function drawn from a Gaussian process which is a distribution over functions. The general properties of functions drawn from a Gaussian process prior are determined by a {\em covariance function} which can be used to specify features such as smoothness and stationarity. We choose a covariance function that ensures $p(t)$ is a smooth function of time since our data are averaged across a cell population. Our choice of covariance function is non-stationary and has the property that the function has some persistence and therefore tends to stay at the same level between observations (see Supplementary Material for further details). The advantage of using a non-parametric approach is that we only have to estimate a small number of parameters defining the covariance function (two in this case, defining the amplitude and time-scale of the function). If we were to represent $p(t)$ as a parametrised function we would have to estimate a larger number of parameters to describe the function with sufficient flexibility. The Bayesian inference procedure we use to associate each estimated parameter with a credible region would be more challenging with the inclusion of these additional parameters. We have previously shown how to perform inference over differential equations driven by functions modelled using Gaussian processes~\cite{Lawrence2007,Gao2008,Honkela2010}. The main methodological novelty in the current work is the inclusion of the delay term in equation~\eqref{eqn:model} and the development of a Bayesian inference scheme for this and other model parameters. 
In brief, we cast the problem as Bayesian inference with a Gaussian process prior distribution over $p(t)$ that can be integrated out to obtain the data likelihood under the model in Eqn.~\eqref{eqn:model} assuming Gaussian observation noise. This likelihood function and its gradient are used for inference with a Hamiltonian MCMC algorithm~\cite{Duane1987} to obtain a posterior distribution over all model parameters and the full pol-II and mRNA functions $p(t)$ and $m(t)$. \section{Results} We model the transcriptional response of MCF-7 breast cancer cells after stimulation by estradiol to activate estrogen receptor (ER-$\alpha$) signalling. Fig.~\ref{fig:models1} shows the inferred pol-II and mRNA profiles for all genes with sufficient signal for modelling, along with some specific examples of fitted models and estimated delay parameters. Before discussing these results further below, we describe the application of our method to realistic simulated data to assess the reliability of our approach for parameter estimation under a range of conditions. \subsection{Simulated data} We applied our method to data simulated from the model in Eqn.~\eqref{eqn:model} using a $p(t)$ profile inferred from pol-II data for the TIPARP gene (gene c in Fig.~\ref{fig:models1}; see Supplementary Material for further details on the simulated data). We simulated data using different values of $\alpha$ and $\Delta$ to test whether we can accurately infer the delay parameter $\Delta$. Fig.~\ref{fig:synthetic} shows the credible regions of $\Delta$ for different ground truth levels (horizontal lines) and for different mRNA degradation rates (half-lives given on the $x$-axis). The results show that $\Delta$ can be confidently inferred, with the ground truth always lying within the central part of the credible region. The maximum error in posterior median estimates is less than 10 min and when positive, the true value is always above the 25th percentile of the posterior.
We observed that as the mRNA half-life increases, our confidence in the delay estimates is reduced. This is because the mRNA integrates the transcriptional activity over a period proportional to the half-life, leading to a more challenging inference problem. We also note that inference of the degradation parameter $\alpha$ is typically more difficult than inference of the delay parameter $\Delta$ (see Fig.~\ref{fig:synthetic_halflives}). However, a large uncertainty in the inferred degradation rate does not appear to adversely affect the inference of the delay parameters which are the main focus here. More time points, or a different spacing of time points, would be needed to accurately infer the degradation rates. Additional results of delay estimation in a scenario where the simulated half-life changes during the time course are presented in Fig.~\ref{fig:synthetic_changing}. These results demonstrate that the obtained delay estimates are reliable even in this scenario. \begin{figure}[t] \centering \includegraphics{synthetic_delays} \caption{Boxplots of parameter posterior distributions illustrating parameter estimation performance on synthetic data for the delay parameter $\Delta$. The strong black lines indicate the ground truth used in data generation. The box extends from the 25th to the 75th percentile of the posterior distribution while the whiskers extend from the 9th to the 91st percentile. The results show that delay estimates are accurate and reliable, with the true value always in the high posterior density region. \label{fig:synthetic} } \end{figure} \subsection{Estrogen receptor signalling} We applied our method to RNA-Seq and pol-II ChIP-Seq measurements from MCF-7 cells stimulated with estradiol to activate ER-$\alpha$ signalling (see Methods section). The measurements were taken from cells extracted from the same population to ensure that time points are directly comparable across technologies. Example fits of our model are shown in Fig.~\ref{fig:models1}.
The examples show a number of different types of behaviour ranging from early induced (a-c) to late induced (d-f), and from very short delays (a, d, e) to longer delays (b, c, f). Example (e), ECE1, is illuminating because visual inspection of the profiles suggests a possible delay, but a more likely explanation according to our model is a longer mRNA half-life, and the posterior probability of a long delay is quite low. Indeed, it is well known that differences in stability can lead to delayed mRNA expression \cite{Hao2009}, and therefore a delay in the mRNA expression peak relative to the pol-II peak time is not sufficient to indicate a production delay. Changes in splicing can be another potential confounder, but our transcript-based analysis of RNA-seq data can account for that. An example of how a more naive RNA-seq analysis could fail here is presented in Fig.~\ref{fig:osgin1_example}. The parameter estimates of the models reveal a sizeable set of genes with strong evidence of long delays between the end of transcription and production of mature mRNA. We were able to obtain good model fits for 1864 genes. We excluded 50 genes with a posterior median delay $>$120 min, as these are unreliable due to sparse sampling late in the time course, which is apparent from broad delay posterior distributions. Out of the remaining 1814 genes with reliable estimates, 204 (11\%) had a posterior median delay larger than 20 min between pol-II activity and mRNA production while 98 genes had the 25th percentile of the delay posterior larger than 20 min, indicating confident high delay estimates. A histogram of median delays is shown in Fig.~\ref{fig:delay_analysis} (left). The 120 min delay cut-off was selected by visual inspection of model fits, which were generally reasonable for shorter delays.
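The ECE1 example above illustrates why a delayed mRNA peak alone does not imply a production delay: both a delay $\Delta$ and a long half-life shift the peak. A short forward simulation of Eqn.~\eqref{eqn:model} makes the distinction concrete (this is an illustrative sketch with a hypothetical pol-II pulse and made-up parameter values, not the inference code used in the paper):

```python
import numpy as np

def simulate_mrna(p, beta, alpha, delta, dt=1.0, m0=0.0):
    """Forward-Euler integration of dm/dt = beta * p(t - delta) - alpha * m(t)."""
    lag = int(round(delta / dt))
    m = np.empty_like(p, dtype=float)
    m[0] = m0
    for i in range(1, len(p)):
        # pol-II activity evaluated `delta` minutes in the past
        p_lag = p[i - 1 - lag] if i - 1 - lag >= 0 else p[0]
        m[i] = m[i - 1] + dt * (beta * p_lag - alpha * m[i - 1])
    return m

t = np.arange(0.0, 400.0, 1.0)
pol2 = np.exp(-0.5 * ((t - 60.0) / 20.0) ** 2)  # hypothetical pol-II pulse at 60 min

base    = simulate_mrna(pol2, beta=1.0, alpha=np.log(2) / 30.0, delta=0.0)
delayed = simulate_mrna(pol2, beta=1.0, alpha=np.log(2) / 30.0, delta=40.0)
stable  = simulate_mrna(pol2, beta=1.0, alpha=np.log(2) / 120.0, delta=0.0)

# Both a delay and a longer half-life move the mRNA peak later,
# but only the longer half-life also broadens the profile.
print(t[np.argmax(base)], t[np.argmax(delayed)], t[np.argmax(stable)])
```

A delay shifts the whole mRNA profile essentially rigidly, whereas a longer half-life shifts the peak while also slowing the decay, which is what allows the model to discriminate the two explanations.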
Note that late time points in our data set are highly separated due to the exponential time spacing used and thus the model displays high levels of uncertainty between these points (see Fig.~\ref{fig:models1}). Therefore genes displaying confident delay estimates are typically early-induced such that time points are sufficiently close for a confident inference of delay time. Our Bayesian framework makes it straightforward to establish the confidence of our parameter estimates. \subsection{Genomic features associated with long-delay genes} Motivated by previous studies~\cite{Khodor2012, Pandya-Jones2013, Bentley2014} we investigated statistical associations between the observed RNA production delay and genomic features related to splicing. We find that genes with a short pre-mRNA (Fig.~\ref{fig:delay_tails}, left panel) are more likely to have long delays. We also find that genes where the ratio of the last intron's length in the longest annotated transcript over the total length of the transcript is large (Fig.~\ref{fig:delay_tails}, right panel) are more likely to have long delays, but this effect appears to be weaker. These two genomic features, short pre-mRNA and relatively long last introns, are positively correlated, making it more difficult to separate their effects. To do so, Fig.~\ref{fig:delay_tails_lenfilter} shows versions of the right panel of Fig.~\ref{fig:delay_tails} but only including genes with pre-mRNAs longer than 10 kb or 30 kb. The number of genes with long last introns in these sets is smaller and the resulting $p$-values are thus less extreme, but the general shape of the curves is the same. We did not find a significant relationship with the absolute length of the last intron. This may be because the two observed effects would tend to cancel out in such cases. We also checked if exon skipping is associated with long delays as previously reported~\cite{Pandya-Jones2013}.
The corresponding results in Fig.~\ref{fig:delay_tails_exonskip} show no significant difference in estimated delays in genes with and without annotated exon skipping. \begin{figure}[t] \centering \includegraphics{delay_analysis} \caption{Left: A histogram of delay posterior medians from 1864 genes found to fit the model well. Estimated delays larger than 120 min are considered unreliable and are grouped together. These 50 genes were excluded from further analysis, leaving 1814 genes for the main analysis. Right: Estimated gene transcriptional delay for the longest transcript plotted against the estimated posterior median RNA production delay. The transcriptional delay is estimated assuming each gene follows the median transcriptional velocity measured in Ref.~\cite{Danko2013}. The solid line corresponds to equal delays. \label{fig:delay_analysis}} \end{figure} \begin{figure}[t] \centering \includegraphics{delay_survival} \caption{Tail probabilities for delays. Left: genes whose longest pre-mRNA transcript is short ($m$ is the length from transcription start to end). Right: genes with relatively long last introns ($f$ is the ratio of the length of the last intron of the longest annotated transcript of the gene divided by the length of that transcript pre-mRNA). The fraction of genes with long delays $\Delta$ is shown by the red and blue lines (left-hand vertical axis). In both subplots, the black curve denotes the $p$-values of Fisher's exact test for equality of fractions depicted by the red and blue curves conducted separately at each point (right-hand vertical axis) with the dashed line denoting $p<0.05$ significance threshold. Similar plots for other values of $m$ and $f$ as well as different gene filter setups are given in Figs.~\ref{fig:delay_tails1}--\ref{fig:delay_tails2}. 
\label{fig:delay_tails}} \end{figure} \subsection{Analysis of the intronic read and pol-II distribution} We investigated whether there was evidence of differences in the pattern of splicing completion for long-delay genes. To quantify this effect, we developed a pre-mRNA end accumulation index: the difference, between late (80-320 min) and early (10-40 min) times, in the ratio of intronic reads in the last 50\% of the pre-mRNA to the intronic reads in the first 50\%. Fig.~\ref{fig:premrna_accumulation} shows that genes with a long estimated delay display an increase in late intron retention at the later times. There is a statistically significant difference in the medians of index values for short and long delay genes ($p < 0.01$, Wilcoxon's rank-sum test $p$-values for different short/long delay splits are shown in Fig.~\ref{fig:premrna_accumulation}). The example on the left of Fig.~\ref{fig:premrna_accumulation}, DLX3, is a relatively short gene of about 5 kb and thus differences over time cannot be explained by the time required for transcription to complete. The corresponding analysis for pol-II ChIP-seq reads as well as GRO-seq reads is in Fig.~\ref{fig:pol2_accumulation}. It shows a clear delay-associated accumulation in the last 5\% nearest to the 3' end, while for pol-II in the last 50\% the accumulation is universal. These results suggest our short delay genes tend to be efficiently spliced while long delay genes are more likely to exhibit delayed splicing towards the 3' end. There is also evidence of some accumulation of pol-II near the 3' end, although the effect appears relatively weak. We note that Grosso \emph{et al.}~\cite{Grosso2012} identified genes with elevated pol-II at the 3'-end which were found to be predominantly short, consistent with our set of delayed genes, and with nucleosome occupancy consistent with pausing at the 3' end.
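The short-delay versus long-delay group comparison used above can be sketched with a Wilcoxon rank-sum test; the data below are simulated for illustration (gene counts and effect sizes are hypothetical), whereas the real analysis scans over a range of delay cut-offs as in Fig.~\ref{fig:premrna_accumulation}:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)

# Hypothetical posterior median delays (min) and end accumulation indices;
# long-delay genes are given a slightly higher index on average.
delay = rng.exponential(15.0, size=1814)
index = 0.05 * (delay > 20.0) + rng.normal(0.0, 0.1, size=1814)

cutoff = 20.0  # split genes into short- and long-delay groups
short, long_ = index[delay <= cutoff], index[delay > cutoff]
stat, p = ranksums(long_, short)
print(p)
```

The rank-sum test compares the two groups without assuming normality of the index values, which is appropriate given the heavy-tailed read-count data underlying them.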
\begin{figure}[t] \centering \includegraphics{premrna_halfdiff_composite} \caption{Left: We show the density of RNA-Seq reads uniquely mapping to the introns in the DLX3 gene, summarised in 200 bp bins. The gene region is defined from the first annotated transcription start until the end of the last intronic read. The ratio of the number of intronic reads after and before the midpoint of the gene region is used to quantify the 3' retention of introns. The pre-mRNA end accumulation index is the difference between averages of this ratio computed over late times (80-320 min) and early times (10-40 min). Right: Differences in the mean pre-mRNA accumulation index (left-hand vertical axis) in long delay genes (blue) and short delay genes (red) as a function of the cut-off used to distinguish the two groups (horizontal axis). Positive values indicate an increase in 3' intron reads over time. The black line shows the $p$-values of Wilcoxon's rank sum test between the two groups at each cut-off (right-hand vertical axis). \label{fig:premrna_accumulation}} \end{figure} \subsection{Relative importance of production and elongation delays} To better understand the rate-limiting steps in transcription dynamics, we assessed the relative importance of the observed RNA production delays in comparison to transcriptional delays due to elongation time. We estimated elongation times for each gene using an assumed transcriptional velocity corresponding to the 2.1 kb/min median estimate from~\cite{Danko2013} combined with the length of the longest annotated pre-mRNA transcript. Others (e.g.~\cite{waMaina2014}) have reported higher velocities, so this approach should provide reasonable upper bounds on the actual elongation time for most genes. A comparison of these delays with our posterior median delay estimates is shown in Fig.~\ref{fig:delay_analysis} (right).
The figure shows the majority of genes with short production delays and moderate elongation time in the top-left corner of the figure, but 14.3\% (260/1814) of genes have a longer RNA production delay than elongation time. \section{Discussion} Through model-based coupled analysis of pol-II and mRNA time course data we uncovered the processes shaping mRNA expression changes in response to estrogen receptor signalling. We find that a large number of genes exhibit significant production delays. We also find that delays are associated with short overall gene length, relatively long final intron length and increasing late-intron retention over time. Our results support a major role for splicing-associated delays in shaping the timing of gene expression in this system. Our study complements the discovery of similarly large splicing-associated delays in a more focussed study of TNF-induced expression~\cite{Hao2013} indicating that splicing delays are likely to be important determinants of expression dynamics across a range of signalling pathways. It is known that splicing can strongly influence the kinetics of transcription. Khodor {\em et al.} carried out a comparative study of splicing efficiency in fly and mouse and found a positive correlation between absolute gene length and splicing efficiency \cite{Khodor2012}. This suggests that efficient co-transcriptional splicing is facilitated by increased gene length and is consistent with our observation that delays are more common in shorter genes. In these genes it appears that the mature mRNA cannot be produced after transcription until splicing is completed; it is splicing rather than transcription that is the rate-limiting step for these genes. In the same study it was also observed that introns close to the 3'-end of a gene are less efficiently spliced which is consistent with our observation that the relative length of the final intron may impact on splicing delays. 
A further theoretical model supporting a link between long final introns and splicing inefficiency was recently suggested in Ref.~\cite{Catania2013}, but it is unclear if it can fully explain the observed relationships. Our model assumes a constant mRNA degradation rate, which may be unrealistic. Given the difficulty of estimating even a single constant degradation rate for simulated data where the true rate is constant, it seems infeasible to infer time-varying rates with the current data. On the other hand, estimated delays were quite reliably inferred even when we simulated data with a time-varying degradation rate (Fig.~\ref{fig:synthetic_changing}), and hence the potentially incorrect degradation model should not affect the main results significantly. It is important to differentiate the delays found here from the transcriptional delays required for pol-II elongation to complete. Elongation time can be a significant factor in determining the timing of gene induction and elongation dynamics has been modelled using both pol-II ChIP-Seq~\cite{waMaina2014} and nascent RNA (GRO-Seq)~\cite{Danko2013} time course measurements in the system considered here. However, in this study we limited our attention to pol-II data at the 3'-end of the gene, i.e.\ measuring polymerase density changes in the region where elongation is almost completed. Therefore, we will not see transcription delays in our data and the splicing-associated delays discussed above are not related to elongation time. Indeed, the splicing-associated delays observed here are more likely to affect shorter genes where transcription completes rapidly. These splicing-associated delays are much harder to predict from genomic features than transcriptional delays, which are mainly determined by gene length, although we have shown an association with final intron length and gene length.
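The elongation-time bound used in the comparison above is simple arithmetic; a minimal sketch, assuming the 2.1 kb/min median velocity of Ref.~\cite{Danko2013} (the function name and example gene lengths are illustrative):

```python
def elongation_delay_min(premrna_length_bp, velocity_kb_per_min=2.1):
    """Upper-bound elongation time: pre-mRNA length over an assumed velocity."""
    return premrna_length_bp / (velocity_kb_per_min * 1000.0)

# A short ~5 kb gene completes elongation in a couple of minutes, so any
# substantially longer observed delay must arise after transcription.
print(round(elongation_delay_min(5_000), 1))    # ~2.4 min
print(round(elongation_delay_min(100_000), 1))  # ~47.6 min
```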
In the future it would be informative to model data from other systems to establish associations with system-specific variables (e.g.\ alternative splice-site usage) and thereby uncover context-specific mechanisms regulating the delays that we have observed here. \subsection{Availability} Raw data are available at GEO (accession GSE62789). A browser of all model fits and delay estimates is available at \texttt{http://ahonkela.users.cs.helsinki.fi/pol2rna/}. Code for reproducing the experiments is available at\\ \texttt{https://github.com/ahonkela/pol2rna}. \section{Methods} \subsection{Data acquisition and mapping} MCF-7 breast cancer cells were stimulated with estradiol (E2) after being placed in estradiol-free media for three days, as previously described~\cite{waMaina2014}. We measured pol-II occupancy and mRNA concentration from the same cell population collected at 10 time points on a logarithmic scale: 0, 5, 10, 20, 40, 80, 160, 320, 640 and 1280 min after E2 stimulation. At each time point, the pol-II occupancy was measured genome-wide by ChIP-seq and mRNA concentration using RNA-Seq. Raw reads from the ChIP-Seq data were mapped onto the human genome reference sequence (NCBI\_build37) using the Genomatix Mining Station (software version 3.5.2; further details in Supplementary Material). On average 84.0\% of the ChIP-Seq reads were mapped uniquely to the genome. The RNA-seq reads were mapped using bowtie to a transcriptome constructed from Ensembl version 68 annotation allowing at most 3 mismatches and ignoring reads with more than 100 alignments. The transcriptome was formed by combining the cDNA and ncRNA transcriptomes with pre-mRNA sequences containing the full genomic sequence from the beginning of the first annotated exon to the end of the last annotated exon. On average 84.7\% of the RNA-seq reads were mapped. \subsection{RNA-seq data processing} mRNA concentration was estimated from RNA-seq read data using BitSeq \cite{Glaus2012}.
BitSeq is a probabilistic method to infer transcript expression from RNA-seq data after mapping to an annotated transcriptome. We estimated expression levels for all entries in the transcriptome, including the pre-mRNA transcripts, and used the sum of the mRNA transcript expressions in FPKM units to estimate the mRNA expression level of a gene. Different time points of the RNA-seq time series were normalised using the method of \cite{Anders2010}. \subsection{Pol-II ChIP-seq data processing} The ChIP-seq data were processed into time series summarising the pol-II occupancy at each time point for each human gene. We considered the last 20\% of the gene body nearest to the 3'-end. The gene body was defined from the start of the first exon to the end of the last exon in Ensembl 68 annotation. The data were subject to background removal and normalisation of time points. (Full details in the Supplementary Material.) \subsection{Filtering of active genes} We removed genes with no clear time-dependent activity by fitting time-dependent Gaussian process models to the activity curves and only keeping genes with a Bayes factor of at least 3 in favour of the time-dependent model compared to a null model with no time dependence. We also removed genes that had no pol-II observations at 2 or more time points. This left 4420 genes for which we fitted the models. \subsection{Modelling and parameter estimation} We model the relationship between pol-II occupancy and mRNA concentration using the differential equation in Eqn.~\eqref{eqn:model} which relates the pol-II time series $p(t)$ and corresponding mRNA time series $m(t)$ for each gene. We model $p(t)$ in a nonparametric fashion by applying a Gaussian process (GP) prior over the shapes of the functions.
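To illustrate what a GP prior over $p(t)$ means in practice, the sketch below draws smooth random functions from a standard squared-exponential (RBF) covariance. This is a simplified stand-in for illustration only: the covariance actually used in the paper is a non-stationary, integrated variant of the RBF (see Supplementary Material), and the time grid and hyperparameter values here are hypothetical.

```python
import numpy as np

def rbf_cov(t1, t2, amplitude=1.0, lengthscale=30.0):
    """Squared-exponential covariance; amplitude and lengthscale are the two
    hyperparameters controlling the vertical scale and smoothness in time."""
    d = t1[:, None] - t2[None, :]
    return amplitude ** 2 * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 300.0, 60)
K = rbf_cov(t, t) + 1e-8 * np.eye(len(t))  # jitter for numerical stability
# Draw three smooth random functions from the zero-mean GP prior
draws = rng.multivariate_normal(np.zeros(len(t)), K, size=3)
print(draws.shape)
```

Each row of `draws` is one plausible smooth activity profile under the prior; conditioning on observed data then narrows this distribution to functions consistent with the measurements.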
We slightly modify the model in Eqn.~\eqref{eqn:model} by adding a constant $\beta_0$ to account for the limited depth of pol-II ChIP-Seq measurements, \begin{equation} \frac{\mathrm{d} m(t)}{\mathrm{d} t} = \beta_0 + \beta p(t-\Delta) - \alpha m(t) \ . \label{eq:differential_equation} \end{equation} This differential equation can be solved for $m(t)$ as a function of $p(t)$ in closed form. The pol-II concentration function $p(t)$ is represented as a sample from a GP prior which can be integrated out to compute the data likelihood. The model can be seen as an extension of a previous model applied to transcription factor target identification \cite{Honkela2010}. Unlike Ref.~\cite{Honkela2010}, we model $p(t)$ as a GP defined as an integral of a function having a GP prior with RBF covariance, which implies that $p(t)$ tends to remain constant between observed data instead of reverting back to the mean. Additionally we introduce the delay between pol-II concentration and mRNA production as well as model the initial mRNA concentration as an independent parameter. In the special case where $\Delta=0$ and $m_0=\beta_0/\alpha$, Eqn.~\eqref{eq:mrna_integral_appendix} reduces to the previous model (Eqn.\ 4 in \cite{Honkela2010}). In order to fit the model to pol-II and mRNA time course data sampled at discrete times, we assume we observe $m(t)$ and $p(t)$ corrupted by zero-mean Gaussian noise independently sampled for each time point. We assume the pol-II noise variance is a constant $\sigma_{p}^2$ and infer it as a parameter of the model. The mRNA noise variances for each time point are sums of a shared constant $\sigma_{m}^2$ and a fixed variance inferred by BitSeq by combining the technical quantification uncertainty from BitSeq expression estimation with an estimate of biological variance from the BitSeq differential expression model (full details in Supplementary Material).
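For reference, a sketch of the closed-form solution: with initial condition $m(0)=m_0$, Eqn.~\eqref{eq:differential_equation} integrates to

```latex
\begin{equation*}
m(t) = m_0 e^{-\alpha t}
     + \frac{\beta_0}{\alpha}\left(1 - e^{-\alpha t}\right)
     + \beta \int_0^t e^{-\alpha (t-u)}\, p(u-\Delta)\, \mathrm{d}u,
\end{equation*}
```

so that with $\Delta=0$ and $m_0=\beta_0/\alpha$ the first two terms collapse to the constant $\beta_0/\alpha$, consistent with the special case noted above.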
Given the differential equation parameters, GP inference yields a full posterior distribution over the shapes of the pol-II and mRNA functions $p(t)$ and $m(t)$. We infer the differential equation parameters from the data using MCMC sampling, which allows us to assign a level of uncertainty to our parameter estimates. To infer a full posterior over the differential equation parameters $\beta_0$, $\beta$, $\alpha$, $\Delta$, $m_0$, $E[p_0]=\mu_p$, the observation model parameters $\sigma_{p}^2$, $\sigma_{m}^2$, and the magnitude parameter $C_p$ and width parameter $l$ of the GP prior, we set near-flat priors over reasonable value ranges, except for the delay $\Delta$, whose prior is biased toward 0 (exact ranges and full details are presented in the Supplementary Material). We combine these priors with the likelihood obtained from the GP model after marginalising out $p(t)$ and $m(t)$, which can be performed analytically. We infer the posterior over the parameters by Hamiltonian MCMC sampling. This full MCMC approach utilises gradients of the distributions for efficient sampling and rigorously takes uncertainty over the differential equation parameters into account. Thus the final posterior accounts both for the uncertainty about the differential equation parameters and for the uncertainty over the underlying functions of each differential equation. We ran 4 parallel chains starting from different random initial states for convergence checking using the potential scale reduction factor of~\cite{Gelman1992}. We obtained 500 samples from each of the 4 chains after discarding the first half of the samples as burn-in and thinning by a factor of 10. Posterior distributions over the functions $p(t)$ and $m(t)$ were obtained by sampling 500 realisations of $p(t)$ and $m(t)$ for each parameter sample from the exact Gaussian conditional posterior given the parameters in that sample.
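The convergence diagnostic of \cite{Gelman1992} admits a compact sketch. The implementation below uses the standard between/within-chain variance form of the potential scale reduction factor; the synthetic chains are illustrative, not the paper's actual traces.

```python
import numpy as np

def potential_scale_reduction(chains):
    """Gelman-Rubin R-hat for one scalar parameter.

    chains: array of shape (n_chains, n_samples), post burn-in."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 500))        # 4 chains sampling the same target
stuck = mixed + np.arange(4)[:, None]    # 4 chains stuck at different offsets
r_mixed = potential_scale_reduction(mixed)
r_stuck = potential_scale_reduction(stuck)
```

Well-mixed chains give a factor close to 1, while chains that have not converged to a common distribution give a clearly larger value.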
The resulting posteriors for $p(t)$ and $m(t)$ are non-Gaussian, and are summarised by the posterior mean and posterior quantiles. Full details of the MCMC procedure are in the Supplementary Material.

\subsection{Filtering of results}

Genes satisfying the following conditions were kept for full analysis. (Full implementation details of each step are in the Supplementary Material.) \begin{enumerate} \item $p(t)$ has its maximal peak in the densely sampled region between 1 min and 160 min. \item The estimated posterior median delay is less than 120 min. \item $p(t)$ does not change too much before $t=0 \text{ min}$, to match the known start in steady state. \end{enumerate}

\subsection{Analysis of the gene annotation features associated with the delays}

Ensembl version 68 annotations were used to derive features of all genes. For each annotated transcript, we computed the total pre-mRNA length $m$ as the distance from the start of the first exon to the end of the last exon, as well as the lengths of all the introns. Transcripts consisting of only a single exon (and hence no introns) were excluded from further analysis. For each gene, we identified the transcript with the longest pre-mRNA and used that as the representative transcript for that gene. The last intron share $f$ was defined as the length of the last intron of the longest transcript divided by $m$.

\subsection{Pre-mRNA end accumulation index}

For this analysis, we only considered reads aligning uniquely to pre-mRNA transcripts and not to any mRNA transcripts. We counted the overlap of reads with 200 bp bins starting from the beginning of the first exon of each gene and ending with the last non-empty bin. We computed the fraction $r_{e,i}$ of all reads falling in the latter half of bins in each sample $i$, and defined the index as the difference of the means of $r_{e,i}$ over late time points (80--320 min) and over early time points (10--40 min).
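The end accumulation index reduces to a few lines of array arithmetic. In this sketch the particular time points grouped as early (10--40 min) and late (80--320 min) follow the text, but the toy bin counts are invented for illustration.

```python
import numpy as np

def latter_half_fraction(bin_counts):
    counts = np.asarray(bin_counts, dtype=float)
    return counts[len(counts) // 2:].sum() / counts.sum()

def end_accumulation_index(samples, early=(10, 20, 40), late=(80, 160, 320)):
    """samples: dict mapping time (min) -> per-bin read counts, where bins are
    200 bp windows from the first exon to the last non-empty bin."""
    r = {t: latter_half_fraction(c) for t, c in samples.items()}
    return np.mean([r[t] for t in late]) - np.mean([r[t] for t in early])

# toy gene whose reads shift towards the 3' half at later time points
samples = {10: [4, 4, 4, 4], 20: [4, 4, 4, 4], 40: [4, 4, 4, 4],
           80: [2, 2, 6, 6], 160: [2, 2, 6, 6], 320: [2, 2, 6, 6]}
index = end_accumulation_index(samples)
```

A positive index indicates an accumulation of pre-mRNA reads near the 3' end over time, here $0.75 - 0.50 = 0.25$.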
\section*{Acknowledgments} The work was funded by the European ERASysBio+ initiative project ``Systems approach to gene regulation biology through nuclear receptors'' (SYNERGY) by the BBSRC (BB/I004769/2 to JP, MR and NDL), Academy of Finland (135311 to AH and HT) and by the BMBF (grant award ERASysBio+ P\#134 to GR; grant no. 0315715B to KG). MR, NDL and KG were further supported by EU FP7 project RADIANT (grant no. 305626), and AH and JP by the Academy of Finland (grant nos. 252845, 259440, 251170). \clearpage
\section{Introduction} Quantum random walks exhibit features that can be significantly different from those of their classical counterparts. As a famous example, the hitting time, a fundamental quantity in the theory of random walks, can be exponentially reduced if so-called coined quantum walks are employed \cite{2003_Kempe_Contemp_Phys}. However, such strong results are only known to hold for a few special classes of undirected random walks. Alternative approaches to the quantization of random walks over more general graphs, in which case we speak of Markov chains (MCs), most often aim at more modest polynomial improvements. Using so-called Szegedy-type quantum walks, a generic quadratic improvement in hitting times \cite{2010_Krovi} was shown for all time-reversible MCs\footnote{The guaranteed quadratic improvement is shown provided only one target element exists.}. The generality of the setting, while preventing superpolynomial speedups, is compensated by greater applicability. Early on, related approaches provided, for example, the basis for a quadratic improvement of algorithms for element distinctness \cite{2004_Ambainis}, element detection \cite{2004_Szegedy_IEEE} and the triangle problem \cite{2005_Magniez}. Setting aside hitting times, quantum walks have been investigated for their capacity to speed up mixing processes, that is, the task of preparing stationary distributions of a given MC. This task constitutes another fundamental problem of Markov chain theory. Efficient mixing is, for instance, important in the context of Markov chain Monte Carlo (MCMC) algorithms. MCMC methods are central to many algorithmic solutions to hard combinatorial problems and to problems stemming from statistical physics \cite{1999_Newman}. Quantum improvements in this context have already been reported \cite{2007_Richter, 2007_Richter_NJP, 2008_Wocjan, 2008_Somma_PRL, 2012_Yung, 2011_Temme_Nature}.
Beyond MCMC-related applications, efficient mixing also extends the applicability of the aforementioned quantum hitting time speedups, as the preparation of the relevant stationary distributions is sometimes assumed to be an affordable primitive \cite{2011_Magniez_SIAM,2010_Krovi}. However, despite the considerable interest, a quantum speedup of mixing processes has only been shown for certain classes of MCs \cite{2000_Nayak,2001_Aharonov, 2001_Ambainis,2002_Moore,2007_Richter}, and it is an open conjecture that a generic quadratic speedup for mixing can be obtained for all time-reversible MCs \cite{2007_Richter}. For a recent review on quantum walks see \emph{e.g.} \cite{2011_Reitzner}. In this work we consider the problem of sequentially generating stationary distributions of sequences of slowly evolving Markov chains, illustrated in Fig.~\ref{Fig1}b. \begin{figure} \includegraphics[trim=0 20 0 0, clip=true, width=0.76\textwidth]{Fig1.pdf} \caption{ Standard simulated annealing is presented in part a) of the figure: at each time-step $k,$ we produce a sample from the distribution $\bar{\pi}_k$, which need not be exactly the stationary distribution of the Markov chain MC$_k$. This is used as the initial distribution for the next Markov chain. However, the last sample is distributed (approximately) according to $\pi_t$, which is the stationary distribution of MC$_t$ and the target distribution. Part b) of the figure represents our setting: sequential sampling from a slowly evolving sequence of Markov chains. At each time-step $k,$ we are required to produce an element sampled from $\pi_k,$ which is a good approximation of the stationary distribution of the Markov chain MC$_k$. The sequence need not terminate, or it may be arbitrarily long. } \label{Fig1} \end{figure} This setting is similar to the scenario of simulated annealing, in which case quantum improvements have already been achieved \cite{2008_Wocjan, 2008_Somma_PRL, 2009_Wocjan}.
There is, however, a key distinction between the annealing settings and ours: in annealing settings, the target is to produce a sample from the stationary distribution of the final chain only, and the intermediary chains play only an accessory role. In contrast, in our case we must produce samples sequentially, for each chain in the sequence (and, indeed, the sequence can in principle be infinite). The motivation for this problem stems from recent work in artificial intelligence (AI) \cite{2014_Paparo}, by the authors and other collaborators, but it may have broader applicability. We comment on this further later. For our problem, we first identify two classes of Markov chains, characterized by the distance of their stationary distribution from the uniform distribution. These two classes cover all discrete time-reversible Markov chains, and for both classes mixing can be achieved in time $O(\sqrt{\delta^{-1}} N^{1/4})$, neglecting logarithmic terms. The methods used for mixing differ for the two classes, and the second technique (utilized when the target distribution is, in a sense we specify later, \emph{far} from the uniform distribution) requires additional information about the underlying Markov chain. In particular, it requires a small number of samples from the very stationary distribution we seek to construct. While this additional information cannot be straightforwardly recovered given just one MC, we show that in the context of slowly evolving Markov chains it can. The structure of this paper is as follows. In Section \ref{Sect2} we present related work and clarify the distinction between our setting and previously studied ones. Following this, in Section \ref{Sect3} we cover the preliminaries and introduce all the (sub-)protocols required for our main result. Finally, in Section \ref{Sect4} we give our main result, and finish with a brief discussion in Section \ref{Sect5}.
\section{Related work} \label{Sect2} The setting of slowly evolving MCs is especially relevant in the widely used simulated annealing methods. In MCMC methods in general, the task is to produce a sample from the stationary distribution of some target MC $P_T$. For concreteness, this can be the Gibbs distribution $\sigma_T$ of a physical system at a target (low) temperature $T$. Markov chains which have $\sigma_T$ as the stationary distribution are easy to construct, but, in general, the \emph{mixing time} required to achieve stationarity is prohibitive. Better results are often achieved by using simulated annealing methods, in which one constructs a sequence of MCs $P_1, \ldots, P_t =P_T$ which, for instance, encode the Gibbs distributions at gradually decreasing temperatures. The choice of the temperature-dependent sequence is often referred to as the annealing schedule. The fact that the temperatures decrease gradually ensures that the stationary distributions of neighboring chains are close, so the sequence is slowly evolving. As the temperature corresponding to $P_1$ is high, the stationary distribution of $P_1$ is essentially uniform, and $P_1$ mixes rapidly (effectively in one step). Simulated annealing is then realized by sequentially applying the chains $P_1$ to $P_t$ to the initial distribution. In this process, no individual chain fully mixes, but nonetheless the reached distribution often approximates the target distribution well, even when the number of steps $t$ is substantially smaller than the mixing time of $P_T$ itself. Quantum variants (and generalizations) of the classical annealing approach have been previously addressed in, for instance, \cite{2008_Wocjan, 2008_Somma_PRL, 2009_Wocjan}. There, so-called Szegedy walk operators are employed instead of the classical transition matrices $P_t$.
The approaches differ, with one commonality: at each time-step, the quantum state obtained from the previous step is used in the subsequent step, and thus quantum coherence is maintained throughout the steps of the protocols. Our setting is inspired by a recent result by the authors and other collaborators where Szegedy-type quantum walks are used in problems of AI \cite{2014_Paparo}. In the so-called reflective Projective Simulation (rPS) model of artificial intelligence, at each time-step $t$, the target action of an rPS agent is encoded in the stationary distribution of a MC $P_t$ which is gradually modified as the agent learns through the interaction with the environment. The agent's action, which is chosen by sampling from this distribution, has to be output at each time-step. For more details on the Projective Simulation model for AI, we refer the reader to \cite{2014_Paparo,Julian12,2012_Briegel}. Viewed abstractly, in this setting we have an, in principle, infinite sequence (a stream) of MCs $P_1, P_2, \ldots, P_t, \ldots$ which is slowly evolving. At each time-step $t$, we are required to produce an element sampled according to the stationary distribution of $P_t$ \footnote{For completeness, in the rPS model the agent actually needs to produce a sample from a renormalized tail of the stationary distribution, which can have very low cumulative weight, making the process very slow. To resolve this problem, we have employed a quantum approach in \cite{2014_Paparo}.}. In contrast, in simulated annealing, the sequence is finite, and we are only required to produce a sampled element distributed according to the stationary distribution of the \emph{last} MC. The quantum approaches to simulated annealing cannot be straightforwardly applied to our setting, as this would require measuring the quantum state at each step. This would prevent all the quantum speedup. 
Alternatively, the sequence would have to be re-run from the beginning at each time-step, which is not acceptable as the sequence can be of arbitrary length. The differences between the two settings are illustrated in Figs.~\ref{Fig1}a and \ref{Fig1}b. It is worth noting that even classical simulated annealing methods do not immediately help with our task. In classical simulated annealing, at each time-step $t$ we are dealing with a classical sample (corresponding to step $t$) which can be copied. However, one cannot output the classical sample at time-step $t$ and use it as a seed for the next time-step: this would induce correlations between the samples at different time-steps, whereas we require independent samples \cite{1997_Aldous}\footnote{This problem can be circumvented by letting each MC from the sequence fully mix. However, in this case we lose any advantage of simulated annealing, and just perform brute-force mixing at each time-step.}. \section{Preliminaries} \label{Sect3} In this section, we set up the notation and define the basic tools we will employ throughout this paper. Part of the presentation is inspired by, and closely follows, the approach given in \cite{2011_Magniez_SIAM}. The basic building block we use in this work is the so-called Szegedy walk operator $W(P),$ defined for any ergodic, aperiodic and time-reversible Markov chain $P$. First, we briefly recap a few basic notions regarding Markov chains for the convenience of the reader, and refer to \cite{1998_Norris} for further details. Throughout this paper, $P$ will denote a left-stochastic matrix (a matrix with non-negative, real entries which add up to one in every column). As $P,$ along with an initial distribution, specifies a Markov chain, we will refer to $P$ as the transition matrix and the Markov chain interchangeably.
If $P$ is irreducible and aperiodic, then there exists a unique stationary distribution $\pi,$ such that $P \pi = \pi$ \footnote{In this work we adhere to the convention in which the transition matrices are left-stochastic, and act on column vectors from the left.}. Here, $\pi$ denotes a distribution over the state space, represented as a non-negative column vector $\pi = (\pi_i)_{i=1}^{N},$ $\pi_i \in \mathbbmss{R}^{+}_0$, such that $\sum_i \pi_i =1.$ If $\pi$ is a distribution, then we refer to an element which occurs with the largest probability, $i_{max} = \textup{argmax}_{i} \pi_i,$ as a mode of the distribution $\pi,$ and to the corresponding largest probability $\pi_{max} = \textup{max}_{i} \pi_i$ as the probability of a mode. Note that while the mode need not be unique, the probability of the/a mode is. The final property we require is that the Markov chain $P$ is time-reversible, that is, that it satisfies detailed balance: an ergodic Markov chain $P$ with stationary distribution $\pi$ is time-reversible if the following holds: \EQ{ \pi_j P_{ij} = \pi_i P_{ji}, \forall\ i,j.
} More generally, for an ergodic Markov chain $P$ over a state space of size $N$ with stationary distribution $\pi$, we define the time-reversed Markov chain $P^{\ast}$ by $P^{\ast} = D(\pi) P^{\tau} D(\pi)^{-1},$ where $D(\pi)$ is the diagonal matrix $D(\pi)=\textup{diag}(\pi_1, \ldots, \pi_N).$\footnote{The inverse of $D(\pi)$ always exists, as stationary distributions of irreducible aperiodic Markov chains have non-zero support over the entire state space.} Then, $P$ is time-reversible if $P=P^{\ast}.$ Next, we review the basics of so-called Szegedy-type quantum walks, to an extent inspired by the presentation given in \cite{2011_Magniez_SIAM}. \subsection{The Szegedy walk operator} While the Szegedy walk operator $W(P)$ can be defined directly, it will be useful for us to construct it from a more basic building block, the diffusion operator $U_P.$ The diffusion operator $U_P$ acts on two quantum registers of $N$ states each, and is (partially) defined as follows: \EQ{ U_{P} \ket{i}_{\textup{I}}\ket{0}_{\textup{II}} = \ket{i}_{\textup{I}} \sum_{j=1}^{N}\sqrt{P_{ji}} \ket{j}_{\textup{II}}.\label{UP} } The operator $U_P$ is a natural quantum analog of the operator $P$ in the sense that a classical random walk can be recovered by applying $U_P$, measuring the second register, re-setting the first register to $\ket{0}$, and swapping the registers. While $U_P$ is not uniquely defined, any operator satisfying Eq. (\ref{UP}) will do the job. The operator $U_P,$ and its adjoint, are then used to construct the following operator: \EQ{ \textit{ref}(A) = U_P (\mathbbmss{1}_{\textup{I}} \otimes Z_{\textup{II}}) U_P^{\dagger}, } where $Z = 2 \ket{0}\bra{0} - \mathbbmss{1}$ reflects over the state $\ket{0}$. The operator $\textit{ref}(A)$ is itself a reflector, reflecting over the subspace $A = \textup{span}(\{ U_P \ket{i} \ket{0} \}_i)$. The Szegedy quantum walk is often explained as a bi-partite walk between two copies of the original graph, and $\textit{ref}(A)$ corresponds to one direction.
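The time-reversibility notions above are easy to check numerically. In the left-stochastic convention, detailed balance says the stationary flux $\pi_j P_{ij}$ from $j$ to $i$ is symmetric in $i$ and $j$; the toy chains below (a random walk on a weighted graph, and a biased cycle) are illustrative constructions.

```python
import numpy as np

def stationary_distribution(P):
    """pi with P pi = pi, for an irreducible aperiodic left-stochastic P."""
    vals, vecs = np.linalg.eig(P)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def is_time_reversible(P, tol=1e-10):
    pi = stationary_distribution(P)
    flux = P * pi[None, :]  # flux[i, j] = pi_j P_ij, stationary flow j -> i
    return np.allclose(flux, flux.T, atol=tol)

# random walk on a weighted graph (left-stochastic): reversible by construction
Wt = np.array([[1.0, 1.0, 2.0], [1.0, 1.0, 3.0], [2.0, 3.0, 1.0]])
P_rev = Wt / Wt.sum(axis=0)
# biased 3-cycle: doubly stochastic (uniform pi) but not reversible
P_cyc = (0.9 * np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
         + 0.1 * np.full((3, 3), 1.0 / 3.0))
```

For the weighted-graph walk, the stationary distribution is proportional to the column sums of the weight matrix, which makes the flux matrix manifestly symmetric.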
The other direction is established by defining the diffusion operator in the opposite direction, $V_P = \textit{SWAP}_{\textup{I},\textup{II}}\, U_P\, \textit{SWAP}_{\textup{I},\textup{II}},$ and proceeding analogously to the case of the set $A$ to generate the $\textit{ref}(B)$ operator, reflecting over $B = \textup{span}(\{ V_P \ket{0} \ket{j} \}_j)$. The Szegedy walk operator is then defined as $W(P) = \textit{ref}(B) \textit{ref}(A)$. In \cite{2004_Szegedy_IEEE,2011_Magniez_SIAM} it was shown that the operators $W(P)$ and $P$ are closely related, in particular when $P$ is time-reversible. We will often refer to the \emph{coherent encoding} of a distribution $\pi,$ denoted $\ket{\pi}.$ The state $\ket{\pi}$ is a pure state of an $N$-level system given by $\ket{\pi} = \sum_{i=1}^{N} \sqrt{\pi_i} \ket{i}.$ It is clear that a computational basis measurement (that is, a projective measurement w.r.t.\ the basis $\{\ket{i} \}_i$) of the state $\ket{\pi}$ outputs an element distributed according to $\pi$. In the context of Szegedy-type quantum walks, it is convenient to define another type of coherent encoding, relative to a Markov chain $P$, which we temporarily denote $\ket{\pi'}.$ This encoding is defined by $\ket{\pi'} =U_{P} \ket{\pi}_{\textup{I}} \otimes \ket{0}_{\textup{II}}$, where $U_P$ is the Szegedy diffusion operator. It is easy to see that $\ket{\pi}$ and $\ket{\pi'}$ are trivially related via the diffusion map (more precisely, the isometry $\ket{\pi} \rightarrow U_P \ket{\pi} \otimes \ket{0}$), and moreover that a computational basis measurement of the first register of $\ket{\pi'}$ also recovers the distribution $\pi$. Due to this, by abuse of notation, we shall refer to both encodings as \emph{the coherent encoding} of the distribution $\pi,$ and denote both by $\ket{\pi},$ where the particular encoding will be clear from the context. For the majority of the text, we will be using the latter meaning.
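For small chains, $W(P)$ can be built explicitly as a dense matrix from the projectors onto $A$ and $B$, which sidesteps the (non-unique) completion of $U_P$ to a full unitary. The following sketch uses an illustrative 3-state reversible chain and verifies that the coherent encoding of $\pi$ is a $+1$ eigenstate of $W(P)$.

```python
import numpy as np

def szegedy_walk(P):
    """Dense W(P) = ref(B) ref(A) on C^N (x) C^N for a left-stochastic P.

    Columns of A are |i>|p_i>, columns of B are |p_j>|j>."""
    N = P.shape[0]
    S, I = np.sqrt(P), np.eye(N)
    A = np.column_stack([np.kron(I[:, i], S[:, i]) for i in range(N)])
    B = np.column_stack([np.kron(S[:, j], I[:, j]) for j in range(N)])
    refA = 2.0 * A @ A.T - np.eye(N * N)
    refB = 2.0 * B @ B.T - np.eye(N * N)
    return refB @ refA, A

# reversible toy chain (random walk on a weighted graph) and its stationary pi
Wt = np.array([[1.0, 1.0, 2.0], [1.0, 1.0, 3.0], [2.0, 3.0, 1.0]])
P = Wt / Wt.sum(axis=0)
pi = Wt.sum(axis=0) / Wt.sum()

W, A = szegedy_walk(P)
pi_coherent = A @ np.sqrt(pi)  # |pi> = U_P sum_i sqrt(pi_i) |i>|0>
```

Reversibility is what puts $\ket{\pi}$ in both $A$ and $B$ simultaneously, so both reflections leave it unchanged.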
With these definitions in place we can further clarify the relationship between the classical transition operator $P$ and the Szegedy walk operator $W(P).$ Let $\pi$ be the stationary distribution of $P,$ so $P \pi = \pi$. Then the coherent encoding of the stationary distribution $\pi$ of $P$, given by $\ket{\pi} = U_P \sum_{i} \sqrt{\pi_i} \ket{i} \ket{0},$ is also a $+1$ eigenstate of $W(P)$, that is, $W(P)\ket{\pi} = \ket{\pi}$. Moreover, in the subspace $A+B$, the so-called \emph{busy subspace}, it is the unique $+1$ eigenstate. On the orthogonal complement of the busy subspace, $W(P)$ acts as the identity. Moreover, the spectra of $P$ and $W(P)$ are intimately related, and in particular the spectral gap \EQ{\delta = 1- \max_{{ \lambda \in \sigma(P) \atop \lambda \not= 1}} |\lambda|,} where $\lambda$ denotes the eigenvalues of $P$ and $\sigma(P)$ denotes the spectrum of $P$, is essentially quadratically smaller than the phase gap \EQ{\Delta = \min\left\{ 2 | \theta| \,|\, e^{i \theta} \in \sigma\left(W\left(P\right)\right),\ \theta \not= 0\right\}, } where $\theta$ denotes the arguments of the eigenvalues, i.e.\ the eigenphases, of $W(P)$. This relationship is at the very basis of all speedups obtained from employing Szegedy-type quantum walks, as we shall elaborate on further. In this paper we will not need results beyond those briefly presented here, and we refer the interested reader to \cite{2011_Magniez_SIAM, 2010_Krovi} for further details. \subsection{$\ket{\pi}$ projective measurement} The first application of the walk operator $W(P)$ allows us to approximate a projective measurement onto the $\ket{\pi}$ state, where $\pi$ is the stationary distribution of $P$. This is achieved by using Kitaev's phase detection algorithm\footnote{The original algorithm by Kitaev allows the estimation of the eigenphases of a given operator, where the final step is an inverse quantum Fourier transform ($QFT^{\dagger}$) on the phase-containing register.
This algorithm can be, for our purposes, further simplified by substituting the $QFT^{\dagger}$ with a suitable number of Hadamard gates, as suggested in \cite{2008_Wocjan}. This substitution maintains the probability of observing a `zero' phase, and the corresponding post-selected state, and thus can be used to detect a non-zero phase. For this reason, this slightly tweaked algorithm is called the phase \emph{detection} algorithm.} \cite{1996_Kitaev} on $W(P)$ (with precision $ \tilde{O}(1/\sqrt{\delta})$), which, if followed by a measurement of the phase-containing register, approximates the projective measurement onto the state $\ket{\pi}.$ To understand why this holds, recall that the $W(P)$ operator has the state $\ket{\pi}$ as its unique $+1$ eigenstate in the busy subspace. Moreover, the phases of all other eigenstates (in the same subspace) are at least $\Delta$ in magnitude. Thus, provided the state we perform the measurement on is in $A + B$, the residual state, conditioned on detecting a zero phase, is a good approximation of $\ket{\pi}.$ The error can be further suppressed by iterating the procedure, as was suggested in \cite{2011_Magniez_SIAM}, there for the purpose of approximate reflection, which we elaborate on next. More precisely, the errors can be made exponentially small with linear overhead, yielding an overall cost of $\tilde{O}(1/\sqrt{\delta})$. Here $\tilde{O}$, the so-called soft-O notation, ignores logarithmically contributing factors, in this case stemming from the quality of the approximation. This result can be seen as a consequence of Theorem 6 in \cite{2011_Magniez_SIAM}. This is a very useful tool for `purifying' an already good approximation of the target state $\ket{\pi}.$ However, this projective measurement behaves correctly only if we are guaranteed that the state we have is in the space $A + B.$ Fortunately, this is easy to achieve.
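The $\tilde{O}(1/\sqrt{\delta})$ cost above rests on the quadratic relation between the spectral gap of $P$ and the phase gap of $W(P)$. It can be checked numerically via the discriminant matrix, without building $W(P)$ itself; the chain weights below are illustrative.

```python
import numpy as np

# toy reversible chain (illustrative weights, left-stochastic normalisation)
Wt = np.array([[1.0, 1.0, 2.0], [1.0, 1.0, 3.0], [2.0, 3.0, 1.0]])
P = Wt / Wt.sum(axis=0)

# discriminant D_ij = sqrt(P_ij P_ji); for a time-reversible chain D is
# symmetric and similar to P, so its eigenvalues coincide with those of P
D = np.sqrt(P * P.T)
sigma = np.sort(np.linalg.eigvalsh(D))  # ascending; the top eigenvalue is 1

delta = 1.0 - np.abs(sigma[:-1]).max()  # spectral gap of P
# on the busy subspace, the eigenphases of W(P) come in pairs +/- 2*arccos(s)
# for singular values s of D, so the smallest non-zero eigenphase is
# 2*arccos(1 - delta); with Delta = min 2|theta| this gives
Delta = 4.0 * np.arccos(1.0 - delta)
```

Since $\arccos(1-\delta) \geq \sqrt{2\delta}$, the phase gap is at least of order $\sqrt{\delta}$, which is exactly what phase detection exploits.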
In particular, testing whether a given state is in $A$ (or $B$) is straightforward: one simply applies $U_P^{\dagger}$ (or $V_P^\dagger$) and checks the contents of the second (or first) register. Provided we observe the state $\ket{0},$ we are guaranteed to be in the correct subspace. Since the target state $\ket{\pi}$ is in $A,$ it suffices to first check whether the initial state is in $A$ and, if it is, perform the phase measurement. The sequence of these two measurements ($A$-membership measurement, followed by the phase measurement) constitutes the $\ket{\pi}$ projective measurement. The success probability of this measurement, applied to a pure state $\ket{\psi},$ is in $O (F(\ket{\psi},\ket{\pi})),$ that is, on the order of the fidelity $F(\ket{\psi},\ket{\pi}) = |\bra{\psi}\pi \rangle |^2$ between the input state and the $\ket{\pi}$ state. Note that if the measurement were perfect, the success probability would be exactly the fidelity. \subsection{Approximate reflection over $\ket{\pi}$} One of the central tools in the theory of Szegedy-type quantum walks is the so-called approximate reflection operator $ARO(P) \approx 2\dm{\pi} - \mathbbmss{1}$, which approximately reflects over the state $\ket{\pi}$ \cite{2011_Magniez_SIAM}. The basic idea for the construction of this operator is similar to the one we gave for the $\ket{\pi}$ projective measurement. By applying Kitaev's phase detection algorithm on $W(P)$ (with precision $O(\log(\Delta))$), applying a phase flip to all states with phase different from zero, and undoing the phase detection algorithm, we obtain an arbitrarily good approximation of the reflection operator $R(P) = 2 \dm{\pi} - \mathbbmss{1}$, for any state within $A+B$.
The errors of the approximation can be efficiently suppressed by iteration (by the same arguments as for the $\ket{\pi}$ measurement) \cite{2011_Magniez_SIAM}, so the cost of the approximate reflection operator is again in $\tilde{O}(1/\Delta) = \tilde{O}(1/\sqrt{\delta}).$ Thus, the second gadget in our toolbox is the operator $ARO(P),$ which approximates a perfect reflection $R(P)$ on $A+B$ while incurring a cost of $\tilde{O}(1/\sqrt{\delta})$ calls to the walk operator $W(P)$. The operator $ARO(P)$ is central to many of the results employing Szegedy-type walks \cite{2011_Magniez_SIAM, 2010_Krovi}, in particular in tasks of element finding, as we clarify next. \subsection{Element searching and unsearching} The approximate reflection operator $ARO(P)$, along with the capacity to flip the phase of a chosen subset of the computational basis elements, suffices for the implementation of an amplitude amplification \cite{2000_Brassard} algorithm. This, in turn, allows us to find the chosen elements with a quantum speed-up. To illustrate this, assume we are given the state $\ket{\pi},$ the (ideal) reflector $R(P),$ and assume we are interested in finding some set of elements $M \subseteq \{1, \ldots, N \}$. The subset $M$ is typically specified by oracular access to a phase flip operator defined by $Z_M = \mathbbmss{1} - 2\sum_{i \in M} \dm{i}$. Element searching then reduces to iterated applications of $Z_M R(P)$ (which can be understood as a generalized Grover iteration, more precisely amplitude amplification) onto the initial state $\ket{\pi}.$ Let $\tilde{\pi}$ denote the conditional probability distribution obtained by post-selecting on elements in $M$ from $\pi,$ so \EQ{ \tilde{\pi}_i = \left\lbrace \begin{tabular}{cl}\vspace{0.1cm} $\dfrac{\pi_i}{\epsilon},$& $\textup{if} \ i\in M$\\ 0,& \textup{otherwise}, \end{tabular} \right.
\label{EQ3} } with $\epsilon = \sum_{j \in M} \pi_j.$ Let $\ket{\tilde{\pi}} = U_P \sum_i \sqrt{\tilde{\pi}_i} \ket{i} \ket{0}$ denote the coherent encoding of $\tilde{\pi}.$ Note that a measurement of the first register of $\ket{\tilde{\pi}}$ outputs an element in $M$ with probability 1. Thus the capacity for preparing this state implies that the desired element from $M$ can be found directly by measurement. As was shown in \cite{2011_Magniez_SIAM}, applications of $Z_M $ and $R(P)$ leave the register state in the two-dimensional subspace $\textup{span}( \{ \ket{\pi}, \ket{\tilde{\pi}} \})$, and moreover $\tilde{O}(1/\sqrt{\epsilon})$ applications of the two reflections suffice to produce a state $\ket{\psi} \in \textup{span}( \{ \ket{\pi}, \ket{\tilde{\pi}} \})$ such that $| \bra{\psi} \tilde{\pi} \rangle |^2$ is a large constant. Measuring the first register of such a state will yield an element in $M$ with a constant probability, which means that iterating this process $k$ times ensures an element in $M$ is found with a probability exponentially approaching one in $k$. Moreover, since the state $\ket{\psi}$ is in $ \textup{span}( \{ \ket{\pi}, \ket{\tilde{\pi}} \}),$ it is easy to see that the measured outcome, conditioned on being in the set $M$, will indeed be distributed according to $\tilde{\pi}$. In our recent work \cite{2014_Paparo}, and also in \cite{2010_Krovi}, these results were used to produce a sample from the truncated stationary distribution $\tilde{\pi}$ in time $\tilde{O}(1/\sqrt{\epsilon})\times \tilde{O}(1/\sqrt{\delta})$, where the $\tilde{O}(1/\sqrt{\delta})$ term stems from the cost of implementing the approximate reflection operator $ARO(P)$, and $\tilde{O}(1/\sqrt{\epsilon})$ corresponds to the number of iterations which have to be applied. This is a quadratic improvement over classical mixing and position-checking processes which would result in the same distribution.
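The $\tilde{O}(1/\sqrt{\epsilon})$ iteration count can be sanity-checked in the standard two-dimensional rotation picture of amplitude amplification: each $Z_M R(P)$ application advances the angle towards $\ket{\tilde{\pi}}$ by $2\theta$, where $\sin\theta = \sqrt{\epsilon}$. The sketch below counts iterations for a few illustrative values of $\epsilon$.

```python
import numpy as np

def iterations_to_find(eps, target_overlap=0.5):
    """Number of Z_M R(P) applications until |<psi|pi~>|^2 >= target_overlap.

    Two-dimensional rotation picture of amplitude amplification: each
    iteration advances the angle towards |pi~> by 2*theta,
    where sin(theta) = sqrt(eps)."""
    theta = np.arcsin(np.sqrt(eps))
    angle, k = theta, 0
    while np.sin(angle) ** 2 < target_overlap:
        angle += 2.0 * theta
        k += 1
    return k

counts = {eps: iterations_to_find(eps) for eps in (1e-2, 1e-4, 1e-6)}
# shrinking eps by a factor of 100 multiplies the count by about 10
```

The counts grow like $1/\sqrt{\epsilon}$, compared to the $O(1/\epsilon)$ trials a classical rejection scheme would need.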
However, the same process can be used \emph{in reverse} to generate the state $\ket{\pi}$ starting from some fixed basis state $\ket{i'} = U_{P} \ket{i} \ket{0} $ with cost $\tilde{O}(1/\sqrt{\delta}) \times \tilde{O}(1/\sqrt{\pi_i})$. Note that $\pi_i = |\bra{\pi} i' \rangle |^2$ is the probability of sampling the element $i$ from the distribution $\pi$. To see that this works, let $W_{tot}$ denote the product of all $R(P) Z_{\{i\}}$ reflections (so $\tilde{O}(1/\sqrt{\pi_i})$ of them) that need to be applied to find the element $i$. The correctness of the search algorithm then guarantees that the trace distance between the final state and the target state is a (small) constant $c$, so $1/2\| \dm{i'} - W_{tot}\dm{\pi} W_{tot}^\dagger \| \leq c$. But since the trace distance (and also the fidelity) is preserved under unitary maps, and since $W_{tot}$ is unitary, we also have that $1/2\| W_{tot}^\dagger \dm{i'}W_{tot} - \dm{\pi} \| \leq c$. Thus the state obtained by reversing the search process is within constant trace distance of the state $\ket{\pi}.$ But then, the $\ket{\pi}$ projective measurement we described previously will recover (an arbitrarily good approximation of) the $\ket{\pi}$ state with a constant probability. By iterating this entire process should it fail (the iteration is possible, since we can generate $\ket{i'}$ cheaply on demand), we obtain the desired state $\ket{\pi}$ with a probability exponentially approaching one in the number of attempts. Such a process of recovering the state $\ket{\pi}$ corresponds to a classical mixing process.
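The unitary-invariance step in the argument above is elementary to verify numerically. The sketch below checks it for two random pure states and a unitary obtained from a QR decomposition; the dimension and seed are arbitrary.

```python
import numpy as np

def trace_distance(rho, sigma):
    eigs = np.linalg.eigvalsh(rho - sigma)  # rho - sigma is Hermitian
    return 0.5 * np.abs(eigs).sum()

rng = np.random.default_rng(1)
d = 6

def random_state(rng, d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

v, w = random_state(rng, d), random_state(rng, d)
rho, sig = np.outer(v, v.conj()), np.outer(w, w.conj())
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

t_before = trace_distance(rho, sig)
t_after = trace_distance(U @ rho @ U.conj().T, U @ sig @ U.conj().T)
```

Since conjugating both states by the same unitary permutes neither eigenvalues nor their differences, the distance is unchanged, which is what lets the bound be transported through $W_{tot}^\dagger$.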
Classical mixing (for time-reversible Markov chains) can be achieved in time $O(1/\delta \times \log(1/\pi_{min}))$ (ignoring error terms), whereas the quantum process terminates in $\tilde{O}(1/\sqrt{\delta} \times 1/\sqrt{\pi_{min}})$\footnote{We are ignoring the logarithmically contributing precision term $\log(1/error)$ in both cases.} in the worst case, where $\pi_{min}$ denotes the smallest occurring probability in $\pi$. Hence we see a quadratic improvement w.r.t.\ the $\delta$ term in the quantum case. However, the scaling relative to the probability term $\pi_{min}$ constitutes an exponential slowdown relative to the classical mixing bounds, and this trade-off is prohibitive. We highlight that the approach we have just described for attaining stationary distributions by running hitting algorithms in reverse was first proposed by Richter \cite{2007_Richter}\footnote{The approach to quantum mixing we outline here was developed before the authors were aware of the observation by Richter, and independently from the paper \cite{2007_Richter}. During a more extensive literature review, the cited paper by Richter was identified as, to our knowledge, the first paper to outline the idea, as a comment in its preliminaries section.}, extending observations made by Childs~\cite{2004_Childs, 2007_Richter}. The basic idea of this work is to ensure that the choice of the initial seed state $\ket{i}$ is in fact the best possible. However, even the best possible situation can still be too costly, as the highest probability may still be as small as $1/N$, as is the case for the uniform distribution. In these cases there is a more efficient way to prepare the initial state, which we clarify next.
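The trade-off discussed above can be made concrete by plugging in numbers; the magnitudes below are illustrative only.

```python
import numpy as np

def classical_mixing_cost(delta, pi_min):
    # O(1/delta * log(1/pi_min)), error terms ignored
    return (1.0 / delta) * np.log(1.0 / pi_min)

def quantum_unsearch_cost(delta, pi_seed):
    # ~O(1/sqrt(delta) * 1/sqrt(pi_seed)); the best seed is the mode of pi
    return (1.0 / np.sqrt(delta)) * (1.0 / np.sqrt(pi_seed))

# peaked distribution, tiny spectral gap: unsearching from the mode wins
peaked_q = quantum_unsearch_cost(delta=1e-6, pi_seed=0.5)
peaked_c = classical_mixing_cost(delta=1e-6, pi_min=1e-9)
# near-uniform distribution over N = 10^6 states, moderate gap: it loses
flat_q = quantum_unsearch_cost(delta=0.1, pi_seed=1e-6)
flat_c = classical_mixing_cost(delta=0.1, pi_min=1e-6)
```

The quadratic gain in $\delta$ dominates when the seed probability is large, while the $1/\sqrt{\pi_{seed}}$ factor dominates, and ruins the comparison, when the distribution is flat.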
\subsection{Preparation from the uniform distribution} As we have described previously, access to the $W(P)$\footnote{More precisely, we require a controlled variant of the $W(P)$ operator.} operator allows us to perform a projective measurement onto the state $\ket{\pi}.$ Thus, if we prepare the coherent encoding of the uniform distribution state $\ket{u} = U_{P} \left(1/\sqrt{N} \sum_{i} \ket{i} \ket{0} \right)$, then simply by performing the $\ket{\pi}$ projective measurement on it, we have the probability $F(\ket{u}, \ket{\pi}) = |\bra{u} \pi \rangle|^2 $ of collapsing onto the correct state. By repeating this process until we succeed, we obtain a preparation algorithm with expected running time $\tilde{O}(1/\sqrt{\delta}) \times O(1/F(\ket{u}, \ket{\pi})).$ However, we can improve on this by ``Groverizing'' this process, that is, by using amplitude amplification \cite{2000_Brassard}. This amounts to reflecting over $\ket{u}$ and $\ket{\pi}$ iteratively, starting from $\ket{u},$ until we reach a state close to the target state $\ket{\pi},$ with an overall cost $\tilde{O}(1/\sqrt{\delta}) \times O(1/\sqrt{F(\ket{u}, \ket{\pi})})$. Generalizations of this approach are used in \cite{2015_Dunjko2} to generate coherent encodings of stationary distributions in cases where the shape of the target distribution is to some extent known. For the purposes of this paper, however, we will only require unsearching from the uniform and from Kronecker-delta distributions. The preparation method starting from the uniform distribution, and also the unsearch approach from a fixed state, are special cases of the more general amplitude amplification protocol we have just described.
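The $O(1/\sqrt{F})$ scaling of amplitude amplification can be illustrated with the standard two-dimensional rotation picture (a toy model of the iteration count only, not of the Szegedy-walk implementation): the initial overlap is $\sin\theta = \sqrt{F}$, and each amplification round advances the angle by $2\theta$. The function name and the target success probability below are our own illustrative choices:

```python
import numpy as np

def iterations_to_amplify(F, target=0.9):
    # two-dimensional rotation model of amplitude amplification:
    # the initial success amplitude is sin(theta) = sqrt(F); after k rounds
    # (each a pair of reflections) it is sin((2k + 1) * theta)
    theta = np.arcsin(np.sqrt(F))
    k = 0
    while np.sin((2 * k + 1) * theta) ** 2 < target:
        k += 1
    return k

# the round count scales like 1/sqrt(F): shrinking F by 100 grows k about 10x
for F in (1e-2, 1e-4, 1e-6):
    print(F, iterations_to_amplify(F))
```

With $F = N^{-1/2}$, as in the regime discussed below, this yields the $O(N^{1/4})$ reflection count that recurs throughout the paper.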
The two methods of preparing the state $\ket{\pi}$, unsearching and preparation from uniform, are complementary, in the sense that the latter method is more efficient when the stationary distribution is close to uniform, whereas unsearching becomes efficient when some element has a high probability (roughly, when the distribution is far from uniform). The overall approach we present next will use both methods for preparation, and provide a method for identifying the right candidates (elements with the highest probability in $\pi$) for the unsearching approach. In what follows, we will say that the (coherent encoding of the) distribution $\pi$ is \emph{close to uniform} if $F(\ket{\pi}, \ket{u})\geq 1/\sqrt{N},$ and otherwise, we will say the distribution (equivalently, its coherent encoding) is \emph{far from uniform.} \section{The protocol} \label{Sect4} We first establish the notation for the remainder of the paper. A given element of a sequence will be specified by a subscript in the case of transition matrices and spectral gaps, e.g. $P_t, \delta_t$ for the $t^{th}$ element. In the case of distributions, we will use parentheses (e.g. $\pi(t)$), since we have reserved subscripts to denote a particular probability within a given distribution. We proceed by formally specifying the setting we consider. We assume that at each time-step $t$ we are given the Szegedy walk operator $W(P_t),$ associated with a sequence of time-reversible Markov chains $\{ P_t \}_{t=1}^{\infty}$ over the same state space of $N$ elements, along with each spectral gap $\delta_t$\footnote{Effectively, we only require a sensible lower bound on the spectral gap.}. The task is, at each time-step $t$, to generate the coherent encoding of the stationary distribution $\ket{\pi(t)}$, with cost in $\tilde{O}(N^{1/4}/\sqrt{\delta_t}).$ To achieve this, we require further assumptions, namely that the Markov chains are slowly-evolving.
More precisely, we require that the stationary distributions $\pi(t), \pi(t+1)$ of neighboring Markov chains $P_t, P_{t+1}$, respectively, are sufficiently close in terms of the fidelity of their coherent encodings. That is, we require that $F(\ket{\pi(t)},\ket{\pi(t+1)})~\geq~\eta$, where $\eta>0$ is a real constant independent of the spectral gaps and the state space size. Moreover, we will require that the spectral gaps $\delta_t, \delta_{t+1}$ of neighboring chains $P_t, P_{t+1}$ are relatively close, in a sense which we will specify later. As we will explain, this last assumption is not vital, but allows for a more convenient statement of the main result. Finally, we will assume that the coherent encoding of the stationary distribution $\ket{\pi(1)}$ of the first Markov chain is easy to generate. These assumptions are essentially equivalent to the assumptions in \cite{2008_Somma_PRL, 2008_Wocjan}. However, as we have clarified, in contrast to those works, in our result the stationary distribution can be prepared at each time-step $t$ \emph{de novo}, that is, without using any quantum memory from step $t-1$, with cost $\tilde{O}(N^{1/4}/\sqrt{\delta_t}).$ This, for instance, implies that multiple copies can be generated at each time-step as well, if desired, without having to re-run the entire sequence of Markov chains. Moreover, our approach does not depend on the length of the sequence, as each stationary distribution is prepared ``on the fly'', independently of the quantum states utilized in previous steps. Both properties are vital in the context of the active learning agents mentioned previously.
To explain how our protocol works, we will describe two particular settings in which the cost of preparing the encoding of the stationary distribution $\ket{\pi}$ of an $N-$state Markov chain $P$ with spectral gap $\delta$ is in $\tilde{O}(N^{1/4}/\sqrt{\delta}).$ In the first setting, the fidelity between the coherent encoding of the uniform distribution $\ket{u}$ and $\ket{\pi}$ is above $N^{-1/2}$. In this case, as we have shown, the preparation starting from uniform has the desired overall cost $\tilde{O}(F(\ket{u}, \ket{\pi})^{-1/2} \delta^{-1/2}) = \tilde{O}(N^{1/4}/\sqrt{\delta}).$ In the second setting, the stationary distribution $\pi$ of $P$ has a mode probability $\pi_{max}$ (the largest occurring probability) larger than $N^{-1/2}$, and the mode state itself, $i_{max}$, is known. In this case, unsearching from the element $i_{max}$ will produce the target state with cost in $ \tilde{O}(1/\sqrt{\delta} \times 1/\sqrt{\pi_{max}}) =\tilde{O}(N^{1/4}/\sqrt{\delta})$. Our first technical result shows that any Markov chain $P$ fits in one of the two settings above, which is captured by the following Lemma, proven in the Appendix. \begin{lemme} \label{fid:mode:bounds} Let $\pi$ be a distribution over $N$ states, such that $F(\ket{u}, \ket{\pi}) \leq 1/\sqrt{N}.$ Then $\max_{i} \pi_i \geq 1/\sqrt{N}$. Moreover, if $\max_{i} \pi_i \leq 1/\sqrt{N},$ then $F(\ket{u}, \ket{\pi}) \geq 1/\sqrt{N}. $ \end{lemme} The lemma above has a few immediate consequences. First of all, if we are given a Markov chain $P$ (over $N$ states) with known $\delta$, the mode $i_{max}$ of the corresponding stationary distribution $\pi,$ along with the probability of the mode $\pi_{i_{max}}$, then it is clear that we can prepare the stationary distribution within cost $\tilde{O}( N^{1/4}/\sqrt{\delta})$: if $\pi_{i_{max}} \geq 1/\sqrt{N},$ we use the ``unsearch from $\ket{i_{max}}$'' approach.
If it is not, then by the second claim of Lemma \ref{fid:mode:bounds}, we know that we can prepare the target state by the preparation from the uniform distribution within cost $\tilde{O}( N^{1/4}/\sqrt{\delta})$. It is also easy to see that the assumption of knowing the probability of the mode $\pi_{i_{max}}$ is actually not needed. One can first attempt the preparation from the uniform distribution a suitable number of times, where the number of reflections used is upper bounded by $O(N^{1/4})$\footnote{More precisely, we would use a randomized approach as presented in \cite{1998_Boyer}, which only requires a lower bound. We note that the approach of \cite{1998_Boyer} can be applied if a lower bound on the overlap is known, provided the overlap does not surpass $1/4$. The latter is ensured by directly performing the $\ket{\pi}$ projective measurement on the uniform distribution state a couple of times. If it succeeds, we are done; should it fail, we can conclude that the overlap is below $1/4$, as required, except with a probability exponentially decaying in the number of attempts. The same approach, albeit applied to the task of element finding, was first suggested in \cite{2011_Magniez_SIAM}.} - if the target distribution is closer than $1/\sqrt{N}$ to the uniform distribution, in terms of the fidelity, then this will succeed with a probability exponentially close to one in the number of attempts. If all attempts fail, we can be sure (except with exponentially small probability) that we are in the regime where the mode has a probability higher than $1/\sqrt{N},$ and this is all we need to know. Then, the unsearching approach, starting from the mode $i_{max}$, will (with high probability) produce the target state if we employ $O(N^{1/4})$ iterations, so with overall cost $\tilde{O}( N^{1/4}/\sqrt{\delta})$. We will take care of the failure probability of this approach later.
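The dichotomy underlying this case analysis - every distribution either has fidelity at least $1/\sqrt{N}$ with the uniform state, or a mode of probability at least $1/\sqrt{N}$ - can also be probed numerically. The following sketch (our own illustration; the Dirichlet family and parameter range are arbitrary choices used to span near-uniform and peaked distributions) checks that at least one of the two regimes always applies:

```python
import numpy as np

rng = np.random.default_rng(1)

def fidelity_with_uniform(pi):
    # F(|u>, |pi>) = (1/N) * (sum_i sqrt(pi_i))^2
    return np.sqrt(pi).sum() ** 2 / len(pi)

N = 1000
for _ in range(200):
    # concentration parameter spans near-uniform to sharply peaked vectors
    alpha = 10 ** rng.uniform(-1, 1)
    pi = rng.dirichlet(np.full(N, alpha))
    # Lemma 1 dichotomy: the two "bad" cases cannot occur simultaneously
    assert fidelity_with_uniform(pi) >= 1 / np.sqrt(N) or pi.max() >= 1 / np.sqrt(N)
```

This is of course no substitute for the proof in the Appendix; it merely exercises the case distinction the protocol relies on.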
However, even the assumption that the mode (but not its probability) is known is most often too strong to be justified. Nonetheless, if we are dealing with a scenario in which we have a sequence of Markov chains, such that a) the stationary distributions of consecutive Markov chains are sufficiently close, and b) the first Markov chain has a known, easy-to-prepare stationary distribution, then we can recover the same results without the need to explicitly find a mode. To illustrate how this is achieved, consider the setting of just two Markov chains, $P_1$ and $P_2$ (with corresponding stationary distributions $\pi(1)$, $\pi(2)$), such that $\ket{\pi(1)}$ is easy to prepare. By easy to prepare we mean within cost $\tilde{O}( N^{1/4}/\sqrt{\delta_1}),$ so it will, for instance, suffice that we know the mode of $\pi(1)$ and that its probability is above $1/\sqrt{N},$ or that the fidelity (relative to the uniform distribution) is above $1/\sqrt{N}$. To prepare the (coherent encoding of the) stationary distribution of $P_2$, we first attempt to recover it by the preparation from the uniform distribution. If this succeeds, we are done. Should this approach fail, we proceed as follows: we first prepare $c' \in \mathbbmss{N}$ copies of the state $\ket{\pi(1)}$, where $c'$ is a (small) confidence parameter. Recall, we have assumed the stationary distributions of $P_1$ and $P_2$ are close, so we have $F(\ket{\pi(1)}, \ket{\pi(2)}) \geq \eta,$ where $\eta$ is a constant. This implies that a projective measurement onto the state $\ket{\pi(2)}$, performed on the state $\ket{\pi(1)}$, will succeed with probability at least $\eta$.
This measurement has cost $\tilde{O}(1/\sqrt{\delta_2}),$ so with overall cost $\tilde{O}(c'/\sqrt{\delta_2} ) $ we can prepare, on average, $c=\eta c'$ copies of the state $\ket{\pi(2)}$\footnote{We note that if $\eta$ is very small (but, by assumption, independent of $N$ and the spectral gaps) we can do better by utilizing quantum amplitude amplification \cite{2000_Brassard} again: given the initial state $\ket{\pi(1)}$, by using the reflection over it, and the reflection over $\ket{\pi(2)}$, we can obtain the target state $\ket{\pi(2)}$ with a quadratically smaller cost with respect to $\eta$. However, since in this work we assume $\eta$ is constant, this yields the same overall scaling.}. In the actual protocol, we will iterate the preparation until we have $c$ copies, and $c'$ above then establishes the expected number of iterations. Next, we simply measure (the first register of) all of the $c$ copies of the state, obtaining $c$ independent single-element samples from the distribution $\pi(2)$. As it turns out, this is sufficient for the task at hand. If the fidelity of $\ket{\pi(2)},$ relative to the uniform distribution state $\ket{u}$, is below $1/\sqrt{N}$\footnote{This is the case, except with very small probability, since we assume that the approach of preparation from the uniform distribution has failed.}, then with probability $1-2^{-c}$, at least one state $i$, out of the $c$ independently sampled states, has the corresponding probability $\pi(2)_i \geq{1/{(4 \sqrt{N})}}$.
This result is captured by the following Lemma, proven in the Appendix: \pagebreak \begin{lemme}\label{main:lemma} Let $\pi$ be a distribution over $N$ states, and let $F(\ket{\pi}, \ket{u}) \leq 1/\sqrt{N}.$ Then there exists a set of indices $S \subseteq \{1, \ldots, N\}$ such that the two following properties hold: \begin{itemize} \item $\min_{i \in S} \pi_i \geq \dfrac{1}{4 \sqrt{N}}$ and \item $P(S) = \sum_{i\in S} \pi_i \geq \dfrac{1}{2}.$ \end{itemize} \end{lemme} As the next step, we simply sequentially attempt to prepare the target state by unsearching from the sampled states, employing $O(N^{1/4})$ iterations of the reflections. With probability at least $1-2^{-c}$, one of the attempts will succeed. What we have shown is that having a collection of $c$ independent single-element samples from $\pi(2)$ suffices to efficiently prepare $\ket{\pi(2)},$ in the regime where the preparation from the uniform distribution would not be efficient. From these observations, the presented approach for two Markov chains inductively extends to the setting with a sequence of Markov chains that we wish to consider. We now give the full protocol, along with a more rigorous analysis. In what follows, we will assume all the approximate reflection operators are in fact exact, and we will deal with the errors induced by the approximations later. The protocol uses two subroutines. The subroutine \textit{PrepareFromUniform(c)} attempts the preparation from the uniform distribution, using $O(N^{1/4})$ reflections.
If the target distribution state is close to the uniform distribution state (in the sense we defined previously), then by utilizing the randomized approach of \cite{1998_Boyer} we will obtain the target state except with probability below $1/2$\footnote{Here we again, as a technical point, assume that we have eliminated the possibility that the overlap between the uniform distribution and the target distribution is \emph{over} (or equal to) $1/4$, by attempting direct projective measurements first. For this, it will suffice to attempt the projective measurement $3c$ times - failing to generate the target state, if the fidelity is above or equal to $1/4$, will then occur with probability below $(3/4)^{3c} \leq 2^{-c} $. Since the cost of the projective measurements does not depend on $N$, we may ignore this in the complexity analysis.}. We will, for this subroutine, allow for $c$ attempts to prepare the target distribution. Then we will succeed whenever the fidelity relative to the uniform distribution state is above $N^{-1/2},$ except with probability $2^{-c}.$ The output of this subroutine is either the coherent encoding of the stationary distribution, or ``unsuccessful'' - a flag indicating that the preparation failed and that, except with small probability, the target distribution is far from uniform. The cost of this procedure at time-step $t$ is $\tilde{O}(c\, N^{1/4}/\sqrt{\delta_t}).$ The second subroutine is \textit{PrepareSamples(c)}. In the context of the overall protocol, we will make sure that at each step we generate in total $c$ elements sampled from the target distribution. One of these is output, and all are saved, in case we need them for the next step.
The $PrepareSamples(c)$ subroutine, used at time-step $t>2,$ back-tracks to the previous step, and first prepares the coherent encoding $\ket{\pi(t-1)}$ of the previous step. Depending on whether the previous stationary distribution is close to or far from uniform (that is, closer or further than $1/\sqrt{N},$ in terms of the fidelity with the uniform distribution), this may require $c$ samples from that distribution itself. As we have clarified, the overall protocol ensures we always have those. Given the $c$ samples for the previous step, the encoding $\ket{\pi(t-1)}$ can be generated with cost $\tilde{O}(c\, N^{1/4}/\sqrt{\delta_{t-1}}),$ except with probability $2^{-c}$ by Lemma \ref{main:lemma} (in the case we accidentally have bad samples), either by using the preparation from the samples, or by preparing from the uniform distribution. Following this, we apply a $\ket{\pi(t)}-$projective measurement to the state $\ket{\pi(t-1)}$ (with cost $\tilde{O}(1/\sqrt{\delta_t})$), and with probability at least $\eta$ we succeed in projecting onto $\ket{\pi(t)}$. This process is repeated until $c$ copies of $\ket{\pi(t)}$ are generated, and they may be immediately measured. One of the sampled elements (measurement outcomes) is output, and all $c$ sampled elements are stored for future use by the $PrepareSamples$ subroutine. The situation is analogous in the case the previous distribution was prepared from the uniform distribution. We highlight that, irrespective of the method used at time-step $t-1$, $PrepareSamples(c)$ will attempt to regenerate the states $\ket{\pi(t-1)}$ by using the original approach first, but, should that fail, it will switch to the alternative\footnote{Note that the switch from the samples approach to the preparation from uniform is always possible, and the switch from the uniform to the samples approach is possible because we always, regardless of the regime, prepare and store $c$ independently sampled elements.}.
In the case $PrepareSamples(c)$ is run at time-step $t=2,$ the procedure is analogous to the above, with the difference that, by assumption, we can cheaply generate the required encodings $\ket{\pi(1)}$ of the previous step. This subroutine has expected running time $\tilde{O}((c/\eta)\, N^{1/4}/\sqrt{\delta_{t-1}} ),$ and a failure probability of $2^{-c}$. Since we do not consider the scaling in $\eta$, we obtain $\tilde{O}(c\, N^{1/4}/\sqrt{\delta_{t-1}} ).$ Now we can give the protocol, where $t$ denotes the time-steps: \vspace{0.5cm} \noindent \textbf{The protocol}\\ \begin{enumerate} \item If $t=1,$ prepare the corresponding coherent encoding of the stationary distribution, measure, and output the outcome. Keep the operator $W(P_1)$ (and $\delta_1$) in memory for one additional time-step. \item If $t>1,$ execute $PrepareFromUniform(2c)$, $c$ times. If each run generated the target distribution, save $c$ sampled elements for future use, and output one as the current output. If any run returns ``unsuccessful'', abort, and run $PrepareSamples(c).$ In both cases replace the stored operator $W(P_{t-1})$ with the current $W(P_{t})$ (and $\delta_{t-1}$ with $\delta_{t}$), and proceed to the next time-step. \end{enumerate} \subsection{Protocol analysis} First, we analyze the protocol under the assumption that the realized approximate reflection operators are perfect. In this case, the protocol above has, at each time-step $t$ (for $t>1$), an expected running time in $\tilde{O}(2c^2 N^{1/4}/\sqrt{ \min \{\delta_{t-1},\delta_{t} \} } ) = \tilde{O}( c^2\ N^{1/4}/\sqrt{ \min \{\delta_{t-1},\delta_{t} \} } ) $, where $c$ is a confidence parameter, as this expression is the maximum of the costs of the two possible preparation subroutines.
If we additionally assume that the neighboring spectral gaps $\delta_{t-1}$ and $\delta_{t}$ are multiplicatively close, meaning that there exists a constant $\kappa \in \mathbbmss{R}^{+}$ (independent of $N$) such that for all $t>1$ we have \EQ{ \delta_{t-1}/\kappa \leq \delta_{t} \leq \kappa \delta_{t-1}, } then the cost of preparation is in $\tilde{O}( c^2\, N^{1/4}/\sqrt{\delta_t})$ for each $t$, which is the desired cost. The protocol can, however, fail with probability $O(2^{-c}),$ which we clarify next. First, note that the $PrepareFromUniform(c)$ subroutine may fail - that is, report ``unsuccessful'' - although the distribution is in the right regime (close to uniform). In our protocol, we call this subroutine $c$ times, with parameter $2c.$ This entire iteration fails if at least one of the runs reported ``unsuccessful'', although the target distribution was close enough to the uniform distribution. If the target distribution is in the required regime, a single run of $PrepareFromUniform(2c)$ reports ``unsuccessful'' with probability $2^{-2c}.$ The probability of at least one ``unsuccessful'' report in a sequence of $c$ runs is then $1-(1-2^{-2c})^{c} = 1-(1-4^{-c})^{c}$. However, we have that $ 1-(1-4^{-c})^{c} \leq 2^{-c}, $ which we here prove for completeness. We have that \EQ{ 1-(1-4^{-c})^{c} \leq 2^{-c} \Leftrightarrow (1-4^{-c})^{c} \geq 1-2^{-c}. } For the expression $(1-4^{-c})^{c}$ we have, by Bernoulli's inequality, that $(1-4^{-c})^{c} \geq 1- c\, 4^{-c},$ so it suffices to show that $1- c\,4^{-c} \geq 1-2^{-c}$, which is equivalent to $c \leq 2^{c},$ which is true. Thus in our protocol, failure to prepare the required $c$ independently sampled elements, in the case the distribution is sufficiently close to the uniform distribution, occurs with probability at most $2^{-c}$.
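The bound $1-(1-4^{-c})^{c} \leq 2^{-c}$ just proved can also be confirmed numerically over a range of confidence parameters (a plain sanity check, not part of the protocol):

```python
# numerical check of the failure-probability bound used in the analysis:
# 1 - (1 - 4^{-c})^c <= 2^{-c} for positive integers c
for c in range(1, 50):
    failure = 1.0 - (1.0 - 4.0 ** (-c)) ** c
    assert failure <= 2.0 ** (-c), (c, failure)
```

Note that the margin grows quickly: the left-hand side behaves like $c\,4^{-c}$, far below $2^{-c}$ already for moderate $c$.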
If the distribution is not close to uniform, we may end up running the $PrepareSamples(c)$ subroutine, which will attempt the preparation of the $c$ samples by regenerating the encodings of the stationary distributions of the previous step. For this, it may utilize either the $c$ samples from that distribution or attempt the preparation from the uniform distribution state, and in the worst case, it will attempt both. Since the target distribution must be in one of the two regimes, and since both cases have a failure probability of $2^{-c},$ this also gives the overall failure probability. Hence, we have shown that our protocol, under the assumption that all the reflection operators (and measurements) are perfect, generates a sample from (or a coherent encoding of) the target stationary distribution, with cost in $\tilde{O}(c^2\ N^{1/4}/\sqrt{\delta_t} ),$ with a failure probability in $O(2^{-c}).$ In the real protocol, the reflection over the target state $\ket{\pi}$ is not ideal (as we only achieve an approximation of the reflection), and neither is the $\ket{\pi}$ projective measurement. Taking into account the effects of these imperfections, we obtain an expected run-time of $\tilde{O}(c^3\ N^{1/4}/\sqrt{\delta_t} ),$ with the same failure probability in $O(2^{-c}).$ The analysis of this is provided in the Appendix. We finish off this section with a comment on how total failure can be dealt with, when failure is not an option. In the context of (effectively) infinite sequences of Markov chains, the exponentially unlikely failure will eventually occur. In this case, if we are required to proceed although the protocol failed at time-step $t$, one can always prepare a sufficient number of samples from $\ket{\pi(t)}$ in time $\tilde{O}(N^{1/2}/\sqrt{\delta_t} ),$ by forcing the preparation from the uniform distribution. Although this constitutes a quadratic slowdown (w.r.t.
the state space size), it will only occur exponentially rarely, which means that the average preparation cost for each time-step can be kept arbitrarily close to $\tilde{O}( N^{1/4}/\sqrt{\delta_t} ).$ \section{Discussion} \label{Sect5} We have presented a quantum algorithm for sequentially generating stationary distributions of an arbitrarily large sequence of Markov chains. The quantum algorithm outperforms classical approaches whenever the spectral gaps $\delta$ of the Markov chains are below $1/\sqrt{N}$, where $N$ is the size of the state space. In contrast, a straightforward application of the ``mixing by reverse hitting'' approach would yield improvements only in the quadratically more stringent regime where $\delta < 1/N$. The basic observation we have used is that the bottleneck of direct mixing by running hitting algorithms in reverse can be ameliorated when only a small number of elements sampled from the target distribution is available beforehand. We have shown that this can guarantee that the initial state of the unsearch approach is far from the worst-case setting. Following this, we have shown how these samples can be made available in the context of slowly evolving Markov chains. As we have clarified, the presented algorithm has an immediate application in a recent approach to (quantum) artificial intelligence \cite{2014_Paparo}, but it may be useful in other contexts as well. For instance, it may offer improvements for problems stemming from statistical physics. One application could be the case when strictly independent samples from Gibbs distributions of physical systems are required over a large range of temperatures, including the computationally difficult low-temperature regimes. Other applications may be possible as well, for instance in settings where subsequent Markov chains depend on the actual outputs of previous mixing steps.
In this case, quantum-enhanced classical annealing methods become unsuitable, as they need to maintain coherence throughout the protocol steps \cite{2009_Wocjan}. As a feature of our protocol, we point out that each time-step can output not just a classical sample from the target stationary distribution, but a coherent encoding of this distribution. This is not a guaranteed characteristic of quantum mixing protocols \cite{2007_Richter}, and makes our approach suitable for combining with other quantum protocols which start from such a coherent encoding \cite{2011_Magniez_SIAM, 2014_Paparo, 2010_Krovi}. In the protocol we have presented, as in other related works, it is always assumed that, aside from the Markov chains themselves, one also has access to the values of the spectral gaps. This is a potentially problematic assumption since, at least in general, spectral gaps are often difficult to determine. Consequently, methods which do not rely on good lower bounds on the spectral gaps, or, more precisely, which can adaptively estimate the changes in spectral gaps in the context of slowly evolving sequences, are part of ongoing work. \noindent\textbf{Acknowledgments:\\} The authors acknowledge support by the Austrian Science Fund (FWF) through the SFB FoQuS F4012, and the Templeton World Charity Foundation grant TWCF0078/AB46. VD thanks G. D. Paparo for initial discussions. \section{Appendix} In this section we prove the technical lemmas from the main body of the paper, which we repeat for the benefit of the reader. Following this, we provide an analysis of our protocol covering the imperfections in the reflection operators. \noindent\textbf{Lemma} \ref{fid:mode:bounds}. \emph{Let $\pi$ be a distribution over $N$ states, such that $F(\ket{u}, \ket{\pi}) \leq 1/\sqrt{N}.$ Then $\max_{i} \pi_i \geq 1/\sqrt{N}$. Moreover, if $\max_{i} \pi_i \leq 1/\sqrt{N},$ then $F(\ket{u}, \ket{\pi}) \geq 1/\sqrt{N}.
$ } \proof Assume first that $\max_{i} \pi_i \leq 1/\sqrt{N}.$ We ask which distribution minimizes the fidelity relative to the uniform distribution while satisfying the given constraint on the mode(s). We claim that the minimizing distribution is the distribution $\pi$ (up to a permutation of the probabilities, which does not change the overlap with the uniform distribution) defined as follows. Let $p_{max} = 1/\sqrt{N}$ and $k = \left\lfloor \dfrac{1}{p_{max}} \right\rfloor$. For all $i$ such that $1 \leq i \leq k$ we set $\pi_i = p_{max}.$ Furthermore, we set $\pi_{k+1} = (1 - k \ p_{max}). $ Finally, for all remaining states we set $\pi_{i\geq k+2} = 0.$ To see that this is the case, first note that a permutation of the probabilities does not change the overlap with the uniform distribution. Thus it suffices to consider distributions whose probabilities are ordered in decreasing order according to the indices. We will call such distributions decaying distributions \cite{2015_Dunjko2}. Next, we will say that the decaying distribution $\rho$ is obtained from the decaying distribution $\gamma$ \emph{by separating the probabilities of elements} $i$ \emph{and} $j$ \emph{in} $\gamma,$ (for $i<j$) if the following holds: $\gamma_k = \rho_k$ for all $k \not=i$ and $k\not=j$, and $\gamma_i \leq \rho_i$ and $\gamma_j \geq \rho_j$. Intuitively, to obtain $\rho$ from $\gamma$ we simply shift a part of the probability mass at state $j$ to the state $i$ while maintaining the order. Next, note that the distribution $\pi$ is the extreme point of such a probability separation process among all decaying distributions satisfying the constraint on the probability of the mode: $\pi$ can be obtained by iterating this process from any decaying distribution $\sigma$ which satisfies the constraint $\max_{i} \sigma_i \leq 1/\sqrt{N}$. For completeness, we illustrate why this works.
For instance, we may start from the smallest non-zero probability element $i$ in $\sigma$ and decrease it, while increasing the probability of the largest element $j$ in $\sigma$ whose probability is smaller than $p_{max}$, until the modified value of $\sigma_j$ equals $p_{max}$, or until we deplete $\sigma_{i}$. By iterating this procedure, in a finite number of steps we will have reached $\pi$. Next, we claim that if the decaying distribution $\rho$ is obtained from the decaying distribution $\gamma$ by separating the probabilities of elements $i$ and $j$, then $F(\ket{\rho} ,\ket{u}) \leq F(\ket{\gamma} ,\ket{u}).$ This follows from the concavity of the square root: since we are only changing the probabilities of the elements $i$ and $j$, the fidelity with the uniform distribution (fixing all other parameters) is, up to squaring, proportional to $f(p_i, p_j) = \sqrt{p_i} + \sqrt{p_j},$ where $p_i+ p_j$ is constant. This function clearly decreases as $p_i$ grows at the expense of $p_j$ (for $p_i \geq p_j$). But since $\pi$ is the extremal point of the process of separating the probabilities (under the constraint that $\pi_{max} \leq p_{max}$), the distribution $\pi$ as defined minimizes the fidelity under the given constraint on the mode of the distribution. The fidelity between $\ket{\pi}$ and $\ket{u}$ is now easy to compute: we have $F(\ket{\pi}, \ket{u}) = \dfrac{1}{N} \vert \sum_{i} \sqrt{\pi_i} \vert^2,$ and we will evaluate $f(\pi) \mathop{:} = \sum_{i} \sqrt{\pi_i}.$ We have that \EQ{ f(\pi) = \left\lfloor \dfrac{1}{p_{max}} \right\rfloor \sqrt{p_{max}} + \sqrt{1 - \left\lfloor \dfrac{1}{p_{max}} \right\rfloor p_{max}} \label{Eq-first}. } This expression can be further simplified. In the following, for $x\in \mathbbmss{R}^+$, let $\{ x\} = x - \lfloor{x} \rfloor$ denote the fractional part of $x$.
Then we have: \EQ{ \left\lfloor \dfrac{1}{p_{max}} \right\rfloor \sqrt{p_{max}} + \sqrt{1 - \left\lfloor \dfrac{1}{p_{max}} \right\rfloor p_{max}} \\ =(\dfrac{1}{p_{max}} - \{\dfrac{1}{p_{max}}\})\sqrt{p_{max}} + \sqrt{1 - (\dfrac{1}{p_{max}} - \{\dfrac{1}{p_{max}}\})p_{max}} \\= \dfrac{1}{\sqrt{p_{max}}} - \{\dfrac{1}{p_{max}}\}\sqrt{p_{max}} + \sqrt{ \{\dfrac{1}{p_{max}}\} p_{max} } \\= \dfrac{1}{\sqrt{p_{max}}} + \sqrt{p_{max}}( \sqrt{ \{\dfrac{1}{p_{max}}\} } - \{\dfrac{1}{p_{max}}\}). } Since the fractional part is always between 0 and 1, and since on that interval it holds that $\sqrt{x} \geq x,$ the expression $( \sqrt{ \{\dfrac{1}{p_{max}}\} } - \{\dfrac{1}{p_{max}}\})$ is always non-negative; its minimum is zero, and its maximum, attained at $x=1/4$, equals $1/4$. Thus we have $f(\pi) \geq 1/\sqrt{p_{max}},$ so \EQ{ F(\ket{\pi}, \ket{u}) = \dfrac{1}{N} f(\pi)^2 \geq \dfrac{1}{N}\dfrac{1}{\sqrt{p_{max}}^2} = \dfrac{1}{N p_{max}} = \dfrac{1}{\sqrt{N}} \label{Eq-last}. } This proves the second direction of the lemma. By taking the contrapositive of the second direction we immediately obtain \EQ{ F(\ket{u}, \ket{\pi}) < 1/\sqrt{N} \Longrightarrow \max_{i} \pi_i > \dfrac{1}{\sqrt{N}}. } For the case that $F(\ket{u}, \ket{\pi}) = 1/\sqrt{N},$ by similar arguments as before, we get $\pi_{max} \geq 1/\sqrt{N},$ so the Lemma holds. \qed Next, we prove Lemma \ref{main:lemma}. For convenience we rephrase it in terms of the function $f$ defined as $f(\pi) \mathop{:} = \sum_{i} \sqrt{\pi_i},$ which is, up to a square, proportional to the fidelity w.r.t. the uniform distribution: \EQ{F(\ket{\pi}, \ket{u})= \dfrac{1}{N}f(\pi)^2.} \vspace{0.5cm} \noindent \textbf{Lemma} \ref{main:lemma} (rephrased).
\emph{Let $\pi$ be a distribution and let $f(\pi) \leq N^{1/4}.$ Then there exists a set of indices $S \subseteq \{1, \ldots, N\}$ such that the following two properties hold: \begin{itemize} \item $\min_{i \in S} \pi_i \geq \dfrac{1}{4 \sqrt{N}}$ and \item $P(S) = \sum_{i\in S} \pi_i \geq \dfrac{1}{2}.$ \end{itemize}} \proof Assume the lemma does not hold, that is, for every $S \subseteq \{1, \ldots, N\}$ either $\min_{i \in S} \pi_i < \dfrac{1}{4 \sqrt{N}}$ or $ \sum_{i\in S} \pi_i < 1/2$ (or both). Let $S$ be the set of indices of all probabilities occurring in $\pi$ which are larger than or equal to $\dfrac{1}{4 \sqrt{N}} $. Note that, by Lemma \ref{fid:mode:bounds}, since $f(\pi) \leq N^{1/4} \Leftrightarrow F(\ket{u},\ket{\pi}) \leq 1/\sqrt{N}$, there exists at least one probability larger than or equal to $\dfrac{1}{\sqrt{N}},$ thus the set $S$ is non-empty and $P(S)>0$. For the lemma to be false, it must then hold that $ \sum_{i\in S} \pi_i < \dfrac{1}{2}.$ But then, for the complement set of indices $S^{C} = \{1, \ldots, N \} \setminus S$, the following holds: \EQ{P(S^C) = \sum_{i\in S^C} \pi_i \geq 1/2 , \label{eq:max1}} and \EQ{\max_{i\in S^C} \pi_i < \dfrac{1}{4 \sqrt{N}}.\label{eq:sum1}} Note that, by the assumptions of the Lemma, it holds that \EQ{ \sum_{i \in S} \sqrt{\pi_i} + \sum_{i \in S^C} \sqrt{\pi_i} \leq N^{1/4}, \label{contr} } and, as we have seen, $\sum_{i \in S} \sqrt{\pi_i} >0$. Now, consider the renormalized distribution $\tilde{\pi},$ in which all probabilities corresponding to elements in $S$ are set to zero. By Eq. (\ref{eq:max1}), the renormalization factor is below 2.
Then, since $\max_{i\in S^C} \pi_i < \dfrac{1}{4 \sqrt{N}}$ it holds that $\max_i \tilde{\pi}_i < \dfrac{1}{2 \sqrt{N}}.$ Finally, we proceed analogously to the proof of the second direction of the first Lemma (Eq.~(\ref{Eq-first}) to Eq.~(\ref{Eq-last})) to find a bound on the $f(\tilde{\pi})$ function under the constraint that $\max_i \tilde{\pi}_i \leq \dfrac{1}{2 \sqrt{N}}.$ We obtain \EQ{ f(\tilde{\pi}) \geq \dfrac{1}{\sqrt{1/(2\sqrt{N})}} = \sqrt{2 \sqrt{N}}, } which, since $P(S^C) \geq 1/2$, implies that $\sum_{i \in S^C} \sqrt{\pi_i} = \sqrt{P(S^C)}\, f(\tilde{\pi}) \geq \dfrac{1}{\sqrt{2}} f(\tilde{\pi}) \geq N^{1/4}.$ Writing $f(\pi^{S}) \mathop{:} = \sum_{i \in S} \sqrt{\pi_i}$, and since $f(\pi^{S})>0$ (strict inequality), we have the desired contradiction with Eq. (\ref{contr}), since $N^{1/4} + f(\pi^{S})$ is strictly larger than $N^{1/4}$. $\qed$ \paragraph{Analysis for imperfect reflection operators} Here we consider the propagation of errors when the reflection operator over the stationary distribution, and the $\ket{\pi}$ projective measurement, are approximate. Recall that, both in the case of preparation from the uniform distribution and in the case of preparation from a given sampled element $i$, the precision of the approximation of the target state enters the complexity only logarithmically.
More precisely, if $\epsilon$ is the desired bound on the trace distance between the realized distribution and the targeted distribution, and if $\xi$ is the fidelity between the initial state (the uniform distribution, or the given sample state $\ket{i}$) and the target state, then the total cost of the preparation procedure is given by $O\left(\sqrt{\delta^{-1}} \sqrt{\rho^{-1}} \left( \log\left(\epsilon^{-1}\right) + \log\left( \sqrt{\xi^{-1}} \right) \right) \right).$ In the last expression, the second log term compensates for the fact that an imperfect reflector will be applied $ \sqrt{\xi^{-1}}$ times, accumulating errors\footnote{We note that the errors stemming from the iterations of the approximate reflection operator can be further suppressed using more elaborate techniques, see \cite{2011_Magniez_SIAM} for further details.}. Thus, the precision of the approximation contributes only logarithmically to the overall complexity, even in the iterated setting. However, we must make sure that the inductive steps of our protocol, going from one time step to the next, are not overly sensitive to small imperfections. There are two points where the imperfections can cause problems. First, except at the first time-step, the $c$ samples we have stem not from the exact distribution, but from an approximation of it. Second, in the generation of the $c$ samples at step $t$ we used an approximate projective measurement to go from an approximation of $\ket{\pi(t-1)}$ to an approximation of $\ket{\pi(t)}$; only in the exact case does this succeed with probability $\eta$ (the fidelity between the two states). For the second problem, a simple way to bound the deviation of the success probability is to consider the ideal $\ket{\pi}$ projective measurement as a completely positive trace-preserving (CPTP) map $\mathcal{E}_{\ket{\pi}}$ which outputs just the success or failure status (since we care only about the perturbations of the success probabilities).
So \EQ{ \mathcal{E}_{\ket{\pi(t)}} ( \dm{\pi(t-1)}) = \eta \dm{\textup{ok}} + (1-\eta) \dm{\textup{fail}}. } The approximate projective measurement (precise within $\epsilon$) can be represented in the same way by the map $\mathcal{E}^{\epsilon}_{\ket{\pi}},$ and we have that \EQ{ 1/2 || \mathcal{E}^{\epsilon}_{\ket{\pi}} (\rho) - \mathcal{E}_{\ket{\pi}} (\rho)|| \leq \epsilon } for any state $\rho$, where $||\cdot ||$ represents the standard trace norm on quantum states. Indeed, the bound holds for any pure state $\rho,$ and extends to arbitrary mixed states by the triangle inequality. We point out that the claim holds when the complete maps (which also output the heralded quantum state, not just the success/failure bit) are considered; since tracing out only reduces trace distances, the claim as stated above follows as well. Note that we do not need to consider purified systems (nor completely bounded norms on the maps) for our problem. Then if $\mathcal{E}^{\epsilon}_{\ket{\pi(t)}} ( \dm{\pi(t-1)}) = \sigma \dm{\textup{ok}} + (1-\sigma) \dm{\textup{fail}},$ we have that \EQ{ 1/2|| \mathcal{E}_{\ket{\pi(t)}} ( \dm{\pi(t-1)}) - \mathcal{E}^{\epsilon}_{\ket{\pi(t)}} ( \dm{\pi(t-1)}) || = |\eta - \sigma|, } but then also $|\eta -\sigma| \leq \epsilon.$ In the following, let $\rho_{\pi(t-1)}$ denote the $\epsilon-$close approximation of $\ket{\pi(t-1)}$ (in the trace distance), and let $\eta'$ be the success probability of the approximate projective measurement on the approximation $\rho_{\pi(t-1)},$ so \EQ{ \mathcal{E}^{\epsilon}_{\ket{\pi(t)}} ( \rho_{\pi(t-1)}) = \eta' \dm{\textup{ok}} + (1-\eta') \dm{\textup{fail}}. } Then we have that \EQ{ |\eta - \eta'| = 1/2||\mathcal{E}^{\epsilon}_{\ket{\pi(t)}} ( \rho_{\pi(t-1)}) - \mathcal{E}_{\ket{\pi(t)}} ( \dm{\pi(t-1)}) ||, } and then, by adding and subtracting $\mathcal{E}^{\epsilon}_{\ket{\pi(t)}} ( \dm{\pi(t-1)}) $ and using the triangle inequality, we obtain \EQ{ |\eta - \eta'| \leq \epsilon + 1/2||\mathcal{E}^{\epsilon}_{\ket{\pi(t)}} (
\rho_{\pi(t-1)}) - \mathcal{E}^{\epsilon}_{\ket{\pi(t)}} ( \dm{\pi(t-1)}) ||, } which by the contractivity of CPTP maps yields $|\eta - \eta'| \leq 2 \epsilon$. Then, by setting $\epsilon$ to $\eta/4$, we get that if $\eta' < \eta$ (which is the problematic case) then $\eta' \geq \eta/2.$ In other words, as long as we make sure the error is below $\eta/4$ (which is still a constant), we are sure that the success probability of the approximate measurement on the approximate state is at worst halved. This constitutes only a constant multiplicative increase in the run-time of our protocol, so the overall complexity expression is unchanged. The other problem we face, in light of the approximate nature of the operators we use, is that the $c$ sampled elements we obtain do not stem from the distribution $\pi,$ but from an $\epsilon-$close approximation (in terms of the trace distance). To analyze the worst-case influence of this on our protocol, we employ similar arguments as above. Note that the ``preparation from $c$ samples'' subroutine can be viewed as a CPTP map applied on $c$ mixed states, all encoding the underlying probability distribution, which outputs success (heralds that the preparation succeeded), except with probability $2^{-c},$ if the target distribution is in the right regime, i.e. far from uniform. The $c$ mixed states are obtained by computational basis measurements of the ideal coherent encoding of the target probability distribution $\ket{\pi(t)}.$ In the non-ideal case, we have as input $c$ mixed states obtained by a computational-basis measurement of $c$ approximations, which are within $\epsilon$ distance from the ideal states. Since the trace distance can only decrease under measurements, and by its subadditivity w.r.t. tensor products, the total inputs in the ideal and non-ideal case differ by at most $c \epsilon$ (in the trace distance).
But then the output of the procedures (hence, also the success probability) cannot differ by more than $c \epsilon.$ Thus we obtain that the failure probability for the non-ideal case is no greater than $2^{-c} + c\epsilon$. If we set $\epsilon = 2^{-2c},$ the failure probability is upper bounded by $2^{-c+1},$ which obeys the same scaling. Since the error term $\epsilon$ appears logarithmically in the overall complexity, we get an additional multiplicative pre-factor of $\log(2^{2c})$ which is in $O(c)$. Then, the worst case complexity of our approach is given by $\tilde{O}(c^3\,\sqrt{\delta^{-1}} N^{1/4} ),$ with failure probability $2^{-c+1}.$ By adding one to all confidence parameters of the protocol, since $(c+1)^3 \in O(c^3)$, we obtain a cost in $\tilde{O}(c^3\, \sqrt{\delta^{-1}} N^{1/4} )$ and the same failure probability as in the ideal-reflector case, $2^{-c}$. \bibliographystyle{unsrt}
\section{Introduction} \label{intro} For more than ten years now, the appealing idea that our universe is a ``brane'' embedded in a higher dimensional bulk has been extensively explored, and widely accepted as a serious alternative to standard 4-dimensional cosmological models for solving the longstanding problems that prevent the complete understanding of our universe. Codimension-1 branes, since the seminal paper by Randall and Sundrum \cite{Randall:1999vf}, have been shown to be viable, allowing both a low-energy limit that mimics Newtonian gravity and a compelling cosmological dynamics that can accommodate inflation, match cosmological observations with respect to scalar and tensor perturbations, and open novel possibilities for addressing both the initial singularity and the late time acceleration problem (for a review on these and other aspects of Randall-Sundrum cosmology, see, for example, \cite{Langlois:2002bb,Maartens:2003tw}). Codimension-2 branes are, in some sense, even more interesting, but, unfortunately, also much less ``feasible''. In fact, the celebrated ADD mechanism \cite{ArkaniHamed:1998rs} to address the hierarchy problem is viable only with two (or more) extra dimensions. In addition, a proposal has been put forward \cite{Carroll:2003db} that the vacuum energy of a codimension-2 brane can be ``off-loaded'' into the bulk. The brane geometry would stay flat, and the energy would generate a deficit angle in the codimension-2 bulk (thus generating a conical singularity at the origin). This idea could explain the absence of a cosmological constant originating from vacuum fluctuations in quantum field theory. In this approach, the current small value observed for the cosmological constant should be generated by some different mechanism, which can be provided by a generalization of this proposal, the so-called Supersymmetric Large Extra Dimension model (SLED) \cite{Aghababaie:2003wz}.
The idea is that supersymmetry-breaking on the brane at high energy, which does not generate a vacuum energy because of the self-tuning property, induces a supersymmetry-breaking scale in the bulk at a much lower energy, which depends on the size of the extra dimensions. The size required to solve the hierarchy problem within the ADD framework provides an order of magnitude for the bulk SUSY breaking scale that generates, back on the brane, a cosmological constant with the correct order of magnitude. However, this proposal, even in the supersymmetric extension, has met severe criticism \cite{Garriga:2004tq}, because the self-tuning property relies on a tuning of the magnetic flux that stabilizes the extra dimensions, which cannot be kept stable under a phase transition on the brane. Even worse, it is not possible to accommodate on a codimension-2 brane any kind of energy-momentum tensor different from pure tension \cite{Vinet:2004bk}. So, to study the low energy limit, and possibly cosmology, one has to implement some kind of regularization of the 4-dimensional brane. Several regularizations have been proposed \cite{Peloso:2006cq,Kaloper:2007ap,Burgess:2006ds,Burgess:2007vi,Kobayashi:2007qe,Burgess:2008yx}, for which linear analysis shows that weak gravity has the tensor structure of general relativity, but with the presence of some long-range modulus which should disappear from the spectrum at the nonlinear level. The key point, then, is how to describe a non-trivial dynamics of the regularized brane, and eventually how to derive from this description a viable 4-dimensional cosmology. This is, as one can imagine, a formidable task, because it would require handling the complete 6-dimensional dynamics. Some attempts have been made \cite{Papantonopoulos:2007fk,Minamitsuji:2007fx,Copeland:2007ur,Kobayashi:2007hf,Kobayashi:2008bm} using approximations to tackle the problem. In this paper we study the cosmology of a regularized brane in a 6-dimensional conical bulk.
The dynamical equations are obtained by letting the regularized brane move through the bulk and implementing dynamical junction conditions \cite{Kraus:1999it,Kehagias:1999vr}. The model we present can be seen as a warped generalization of the conical codimension-2 brane of \cite{Kaloper:2007ap}, but, since we are interested in cosmology on the brane, we are {\it not} in need of explicitly adding an axion field to recover a 4-dimensional tensional brane (a similar contribution can be obtained by a particular choice of the parameters of the energy-momentum tensor of the brane, see section \ref{junction_sec}). Furthermore, the present model differs from the standard ``rugby-ball'' regularization because the radial extra-dimension is non-compact, and the space-time is ``capped'' only on the inner side of the bulk. This could be a great advantage, because the dynamical behavior of the model should have some peculiar features that cannot be obtained by a low energy KK reduction of a 6-dimensional theory, but it could also lead to completely unacceptable behavior in the regime in which the wrapping of the brane becomes too large. We simply avoid this problem by assuming that the brane always remains close enough to the inner cap (so that even the ``late intrinsic time'' regime must be assumed to respect this limit); this condition can be achieved by conveniently tuning the deficit angle to a near-critical value (which is the same requirement needed to obtain a correct tensorial structure in the linear approximation \cite{Kaloper:2007ap}). With this precaution, and under some assumptions described below, we find that, unlike other models studied with similar techniques, the induced 4-dimensional cosmology can mimic fairly well the standard cosmological model, with an initial singularity and accelerated expansion at late times. The latter is driven by an effective cosmological constant given by a particular combination of the bulk cosmological constant and the brane tension.
The paper is organized as follows: in section \ref{static} we present the static solution and the set-up of the 5-dimensional brane. Then, in section \ref{junction_sec}, we derive the cosmological equations on the brane with the junction conditions. These equations are studied in section \ref{cosmol_sect}, under some assumptions that allow us to solve the equations analytically and, where that is not possible, numerically. Finally, in section \ref{comm_concl} we comment on the results obtained in the previous section and draw our conclusions. \section{The static solution} \label{static} We consider a 6-dimensional space with a cosmological constant, in which a 5-dimensional brane is embedded. We assume the brane to be tensional, and to have a curvature term, as well as matter, on it. The action that describes the model is \begin{equation} S = \int d^6 x \sqrt{-g} \left( \frac{M^4}{2}R - \Lambda_6 \right) - \int d^5 \xi \sqrt{\gamma} \left( \frac{M_5^3}{2} ~ {}^{(5)} R + \mathcal{L}_{brane} \right). \label{action} \end{equation} Capital Latin indices ($A$, $B$, \ldots) run from $0$ to $5$ (and so refer to bulk objects), while Greek indices ($\mu$, $\nu$, \ldots) run from $0$ to $4$ (and so refer to brane objects). The brane intrinsic metric is $\gamma_{\mu \nu}$. The equations of motion following from this action, far from the brane, are: \begin{equation} G_{AB} = -\Lambda g_{AB}, \label{vac_eq} \end{equation} with $\Lambda = \Lambda_6/M^4$. We do not consider the singular contribution coming from the brane in (\ref{vac_eq}), because we will take it into account later via the junction conditions. We seek a metric which has a flat 4-dimensional submanifold, to be identified with our universe, with the two remaining dimensions having the geometry of a cone \cite{Kaloper:2007ap}. The metric is: \begin{equation} ds_6^2 = R^2 ds_4^2 + \left[ \frac{\Lambda}{10}\left( R^2 - \frac{\mu^5}{R^3}\right) \right]^{-1} dR^2 + \beta^2 \ell^2 \left( R^2 - \frac{\mu^5}{R^3}\right) d\chi^2.
\label{vac_metric} \end{equation} with $ds_4^2 = \eta_{\mu \nu} dx^{\mu} dx^{\nu}$. This metric can have a conical singularity at $R = \mu \equiv R_h$, in addition to the ``true'' singularity at $R = 0$. Upon defining the new coordinate $\rho$ by \begin{equation} d \rho^2 = \left[ \frac{\Lambda \mu}{2} \left( R - \mu\right)\right]^{-1} dR^2 \label{rho} \end{equation} the extradimensional part of the metric, close to the horizon, can be approximated by: \begin{equation} ds_2^2 \simeq d \rho^2 + \frac{5}{8}\beta^2 \ell^2 \mu^2 \Lambda \rho^2 d\chi^2 \label{cone_metric} \end{equation} thus we see that the metric is regular close to the horizon if \begin{equation} \beta^2 \ell^2 = \left( \frac{5}{8}\mu^2 \Lambda \right)^{-1}. \label{flat_par} \end{equation} The space-time of our model consists of two manifolds described by the metric (\ref{vac_metric}), joined together at the radial position of the brane $R = R_b$: \begin{eqnarray} && ds_{6,in}^2 = z_i^2 ds_4^2 + \left[ \frac{\Lambda_i}{10}\left( z_i^2 - \frac{\mu_i^5}{z_i^3}\right) \right]^{-1} dR^2 + \beta_i^2 \ell_i^2 \left( z_i^2 - \frac{\mu_i^5}{z_i^3}\right) d\chi^2, \nonumber \\ && ds_{6,out}^2 = z_o^2 ds_4^2 + \left[ \frac{\Lambda_o}{10}\left( z_o^2 - \frac{\mu_o^5}{z_o^3} \right) \right]^{-1} dR^2 + \beta_o^2 \ell_o^2 \left( z_o^2 - \frac{\mu_o^5}{z_o^3} \right) d\chi^2. \label{in-out_metric} \end{eqnarray} where $z = R/R_0 + C$. Continuity along the brane directions requires $z_i = z_o$, while continuity along the compact extradimensional direction $\chi$ gives \begin{equation} \mu_i = \mu_o,~~~~~ \beta_i^2 \ell_i^2 = \beta_o^2 \ell_o^2 = \beta^2 \ell^2. \label{cont_par} \end{equation} We can then, without any loss of generality, set the integration constants to $C = \mu$, so that the space-time ``begins'' at $R = 0$, and $R_0 = 1$ (a different choice would just end up in a rescaling of the cosmological constant).
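As a cross-check, the near-horizon expansion behind eqs.~(\ref{rho}), (\ref{cone_metric}) and (\ref{flat_par}) can be verified symbolically. The following sketch (using sympy) is illustrative only; the coordinate change $R = \mu + \Lambda\mu\rho^2/8$ is an assumption of the sketch, obtained by integrating (\ref{rho}).

```python
import sympy as sp

R, mu, Lam, rho, eps = sp.symbols('R mu Lambda rho epsilon', positive=True)

# Extra-dimensional part of the metric (vac_metric):
#   g_RR      = [Lam/10 * f2(R)]^(-1),   g_chichi = beta^2 l^2 * f2(R),
# with f2(R) = R^2 - mu^5/R^3.
f2 = R**2 - mu**5 / R**3

# Near the horizon R = mu + eps the leading behaviour is f2 ~ 5 mu eps:
lead = sp.series(f2.subs(R, mu + eps), eps, 0, 2).removeO()
assert sp.simplify(lead - 5*mu*eps) == 0

# Proper radial coordinate from Eq. (rho): R(rho) = mu + Lam*mu*rho**2/8
R_of_rho = mu + Lam*mu*rho**2/8
gRR_lead = 1 / (Lam*mu/2 * (R_of_rho - mu))   # leading g_RR near the tip
assert sp.simplify(sp.diff(R_of_rho, rho)**2 * gRR_lead) == 1   # d rho^2 = g_RR dR^2

# Angular coefficient near the tip: beta^2 l^2 * 5 mu (R - mu); with the
# choice beta^2 l^2 = 8/(5 mu^2 Lam) of Eq. (flat_par) it reduces to rho^2,
# i.e. a smooth, deficit-free cone tip.
b2l2 = 8 / (5 * mu**2 * Lam)
assert sp.simplify(b2l2 * 5*mu*(R_of_rho - mu) - rho**2) == 0
```

The last assertion confirms that the choice (\ref{flat_par}) removes the conical deficit of the inner cap.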
So the two parts of the space-time differ only by the different values of the cosmological constants. Having in mind to study mirage cosmology on the brane by allowing it to move through the radial direction, we can impose the relation (\ref{flat_par}) only for the $in$ part of the space-time, so that it ends smoothly at the position of the horizon. On the other side, the $out$ part of the space-time is allowed to have a deficit angle $1-b$, so that the codimension-2 part of the metric, written using the variable $\rho$ of (\ref{rho}), reads: \begin{equation} ds^2_{2,out} = d\rho^2 + (1-b)^2 \rho^2 d\chi^2. \label{cone_metric2} \end{equation} This fixes the relation between the cosmological constants of the $in$ and $out$ parts of the space-time to be: \begin{equation} \Lambda_i = \frac{\Lambda_o}{(1-b)^2}. \label{cc_rel} \end{equation} \section{Cosmological equations on the brane} \label{junction_sec} The presence of matter on the brane, and the movement of the brane itself across the extra dimensions, would in principle modify the bulk geometry. Solving this problem exactly would be extremely complicated, so we assume that cosmology is induced on the brane by implementing time-dependent Israel junction conditions, while the bulk is not modified by the brane movement \cite{Papantonopoulos:2007fk,Minamitsuji:2007fx}. Let us then assume that the brane position is $R_b \equiv a(\tau)$. The brane embedding is thus described by the relation between the bulk and the brane coordinates $\xi^a = (\tau,{\bf x},\chi)$: \begin{equation} t = t(\tau),~~~~~~ R = a(\tau) \label{embedding} \end{equation} the other relations being just identities. So the tangent vectors are trivial, except for the timelike one $u_t^A$, which reads, in the coordinate system we are using: \begin{equation} u_t^A = \left(\dot{t},{\bf 0},\dot{a},0 \right) \label{t_vect} \end{equation} where the dot indicates a derivative with respect to $\tau$.
The normal vector $n^A$ can be expressed as \begin{equation} n^A = \left(n^t,{\bf 0},n^R,0 \right). \label{n_vect} \end{equation} By using the normalization and orthogonality conditions $g_{AB}u^A u^B = -1$, $g_{AB}n^A n^B = 1$, $g_{AB}u^A n^B = 0$ we can express all the unknown functions in terms of the scale factor $a(\tau)$. Of course, these conditions are different on the two sides of the brane, because of the difference between the $in$ and $out$ metric. We have \begin{eqnarray} \dot{t} = \frac{\sqrt{\dot{a}^2 + z^2 f_i(z)}}{z\sqrt{f_i(z)}}, &~~~~~ n^R = -\sqrt{\dot{a}^2 + z^2 f_i(z)}, &~~~~~ n^t = -\frac{\dot{a}}{z^2\sqrt{f_i(z)}} \nonumber \\ \dot{t} = \frac{\sqrt{\dot{a}^2 + z^2 f_o(z)}}{z\sqrt{f_o(z)}}, &~~~~~ n^R = \sqrt{\dot{a}^2 + z^2 f_o(z)}, &~~~~~ n^t = \frac{\dot{a}}{z^2\sqrt{f_o(z)}} \label{vec_values} \end{eqnarray} with \begin{equation} f_{i/o}(z) = \frac{\Lambda_{i/o}}{10}\left(1 - \frac{\mu^5}{z^5} \right). \label{f} \end{equation} The difference is due to the different values of the cosmological constants and to the different orientation of the normal unit vector in the two branches. With the normal unit vector at hand, we can calculate the induced metric $h_{AB} = g_{AB} - n_A n_B$, and consequently the intrinsic line element on the brane: \begin{equation} ds^2_5 = -d\tau^2 + z^2(\tau) {\bf dx}^2 + \beta^2 \ell^2 \left( z^2(\tau) - \frac{\mu^5}{z^3(\tau)} \right) d\chi^2 \label{intr_metric} \end{equation} which is the same on both sides of the brane by means of (\ref{cont_par}), as expected. Then we can evaluate the extrinsic curvature $K_{AB} = h_A^{~~C} \nabla_C n_B$, noting that derivatives w.r.t. the bulk variables are expressed in terms of derivatives w.r.t. brane variables via the chain relation \cite{Papantonopoulos:2007fk} \begin{equation} \partial_A = g_{AB} \partial_\mu x^B \gamma^{\mu \nu} \partial_\nu.
\label{chain_formula} \end{equation} We find \begin{eqnarray} K_{tt} &=& \pm z \sqrt{\dot{a}^2 + z^2 f_{i/o}} \left[ \frac{\ddot{a}}{z f_{i/o}} - \frac{\dot{a}f'_{i/o}}{2z^2 f^2_{i/o}} + 1 \right] \nonumber \\ K_{ij} &=& \mp z \sqrt{\dot{a}^2 + z^2 f_{i/o}} \delta_{ij} \nonumber \\ K_{RR} &=& \frac{\dot{a}^2}{z^4 f_{i/o}\left( \dot{a}^2 + f_{i/o} \right)} K_{tt} \nonumber \\ K_{Rt} &=& - \frac{\dot{a}}{z^2 \sqrt{f_{i/o}\left( \dot{a}^2 + f_{i/o} \right)}} K_{tt} \nonumber \\ K_{\chi \chi} &=& \mp \frac{8\left( 2zf_{i/o} + z^2 f'_{i/o} \right)}{\mu^2 \Lambda_{i/o}^2} \sqrt{\dot{a}^2 + z^2 f_{i/o}} \label{K} \end{eqnarray} where the upper sign refers to the $in$ side of the brane, and the prime indicates derivative w.r.t. $z$. Equations of motion on the brane are obtained by equating the discontinuity of the projected extrinsic curvature across the brane with the energy-momentum contribution on the brane (which, in our case, includes the curvature term): \begin{equation} \left[ K_{\mu \nu} \right] - \left[ K \right] \gamma_{\mu \nu} = \frac{1}{M^4} \left( T_{\mu \nu} - M_5^3 G_{\mu \nu}\right) \label{brane_eom} \end{equation} where $[x]$ stands for $x_o - x_i$, $K_{\mu \nu} = u^A_{~~\mu} u^B_{~~\nu} K_{AB}$ and $K$ is its trace, $G_{\mu \nu}$ is the intrinsic Einstein tensor as calculated from the intrinsic metric $\gamma_{\mu \nu}$. The brane energy-momentum tensor needs to be specified. We assume, in addition to the tension contribution, a ``perfect fluid-like'' form, which is compatible with the symmetry of the space-time: \begin{eqnarray} T_\mu^{~~\nu} &=& -\lambda \eta_\mu ^{~~\nu} + {}^{(p.f.)}T_\mu^{~~\nu} \nonumber \\ {}^{(p.f.)}T_\mu^{~~\nu} &=& {\rm diag}\left( -\rho,p,p,p,P\right). \label{em_tensor} \end{eqnarray} Let us stress that with the particular choice of the form of the e.m. tensor (\ref{em_tensor}), the cosmological constant on the brane is taken into account separately, so that we can impose $w > -1$. 
Of course this is restrictive, since the symmetry of the space-time allows an energy-momentum tensor with $\rho = -p = \lambda$ and $P=-\lambda'$, thus having a sort of ``ring'' cosmological constant that can differ from the 4-dimensional one. We will comment more on this in the next section. Finally, substituting (\ref{em_tensor}) in (\ref{brane_eom}), after some algebraic manipulation, the equations of motion read (from now on, we drop the subscript $i/o$, assuming that all quantities are intended to be on the $out$ side, and use (\ref{cc_rel}) to express appropriately the corresponding $in$ objects): \begin{eqnarray} && \sqrt{H^2 + f} + \sqrt{H^2 + \sigma^2 f} = \frac{2}{M^4}\frac{1 - \left( \frac{\mu}{z} \right)^5}{8-3\left( \frac{\mu}{z} \right)^5} \left( \rho + \lambda \right) - 3 r_c \frac{4 + \left( \frac{\mu}{z} \right)^5}{8-3\left( \frac{\mu}{z} \right)^5}H^2 \label{compeq_1} \\ && \dot{\rho} + H\left( \frac{(4 + 3w) - 3(\frac{1}{2} + w)\left( \frac{\mu}{z} \right)^5} {1-\left( \frac{\mu}{z} \right)^5}\rho - \frac{1 + \frac{3}{2} \left( \frac{\mu}{z} \right)^5} {1-\left( \frac{\mu}{z} \right)^5} P \right) = 0 \label{compeq_2} \\ && -\dot{H}\left[ \left(H^2 + f \right)^{-\frac{1}{2}} + \left(H^2 + \sigma^2f \right)^{-\frac{1}{2}} \right] - \nonumber \\ && -H^2 \left[ \left( 5 - \frac{1 + \frac{3}{2} \left( \frac{\mu}{z} \right)^5}{1-\left( \frac{\mu}{z} \right)^5} \right) \left( \left(H^2 + f \right)^{-\frac{1}{2}} + \left(H^2 + \sigma^2f \right)^{-\frac{1}{2}} \right) + 3 \frac{\sqrt{H^2 + \sigma^2 f}} {H^2 + f} \right] - \nonumber \\ && -f \left[ 4\left(H^2 + f \right)^{-\frac{1}{2}} + \sigma^2 \left(H^2 + \sigma^2f \right)^{-\frac{1}{2}} + 3 \frac{\sqrt{H^2 + \sigma^2 f}}{H^2 + f} \right] = \nonumber \\ && = \frac{1}{M^4} \left( P - \lambda \right) + 3 r_c \left( \dot{H} + 4H^2 \right) \label{compeq_3} \end{eqnarray} with $H = \dot{z}/z$, $\sigma = (1-b)^{-1}$, $r_c = M_5^3/M^4$. Eq.
(\ref{compeq_1}) represents the modified Friedman equation that controls the cosmological evolution of the 5-dimensional brane. Notice, however, that $H$ is the actual 4-dimensional Hubble parameter, since it is obtained from the scale factor that controls the dynamics of the 4-dimensional slice of the brane. Since the brane is wrapped around the azimuthal direction, in order to obtain ``sensible'' 4-dimensional sources we must integrate the energy density $\rho$ over the fifth dimension \cite{Papantonopoulos:2007fk,Minamitsuji:2007fx}: \begin{equation} {}^{(4)} \rho = \int d\chi \sqrt{\gamma_{\chi \chi}} \rho = 2 \pi \sqrt{\gamma_{\chi \chi}} \rho \label{4-D_rho} \end{equation} since we assume that neither the metric nor the energy density depends on the azimuthal coordinate. The modified Friedman eq. (\ref{compeq_1}) could be cast in a more ``conventional'' form $H^2 = f({}^{(4)} \rho)$ by solving it for $H^2$ and inserting ${}^{(4)} \rho$, but its form would be overcomplicated, and we will show that, if one seeks solutions only in particular regimes, the ``effective'' Friedman equations regain their simplicity. Eq. (\ref{compeq_2}) is the conservation equation for the energy-momentum tensor (\ref{em_tensor}), in which we have imposed an equation of state that relates only energy density and pressure, $p = w \rho$. The symmetry of the 5-dimensional space-time leads us to include an extra-dimensional component which is undetectable (unless one includes a gauge coupling between ordinary matter and the extradimensional one), but which modifies the dynamics of the 4-dimensional universe. The presence of such a ``dark'' term is quite common in braneworld models \cite{Binetruy:1999ut,Binetruy:1999hy} and in our model it plays the crucial role of slowing down the dilution of the energy density (in some particular regimes), thus compensating the leakage due to higher dimensionality. Eq.
(\ref{compeq_3}) is the fifth component of the 5-dimensional junction conditions, and (by means of the Bianchi identities) it is related to the fifth component of the conservation equation of the energy-momentum tensor. We assume that $P$ satisfies this equation, which is then a constraint equation for the extradimensional pressure. It is possible to imagine a more complicated scenario in which the extradimensional pressure is not assumed to satisfy a constraint equation, but (perhaps more physically) is related to the energy density by a general equation of state. In this case the system would appear overdetermined, since there would be more equations than degrees of freedom. Actually, a more general extradimensional equation of state would induce a time-dependent tilt in the azimuthal direction, which results in a deformation of the ring shape of the brane. Studying such a system would be very complicated, and would probably not give an acceptable 4D cosmology, because it would be very hard to obtain a FRW-like 4-dimensional slicing of the 5-brane. This is the reason we assume the brane stays rigid during its movement through the cone. The system of eqs. (\ref{compeq_1}-\ref{compeq_3}) looks very complicated to handle. Nevertheless, it is possible, as already anticipated, to make some assumptions that allow us to simplify it, so as to obtain analytic solutions. This will be the aim of the next section. \section{Cosmology on the brane} \label{cosmol_sect} It is more convenient to track the cosmological evolution of the brane backwards, from late to early times. Let us then first consider what happens at late times. We can guess (assuming that the universe is expanding) that $a(\tau) \gg \mu $, so that $f$ will become just proportional to the cosmological constant.
In addition, we can assume that the energy density is negligible with respect to the cosmological constant itself\footnote{Of course, current observations suggest that we are actually living in a very special time in the evolution of our universe, in which the matter energy density and the cosmological constant are of the same order of magnitude. We will not pursue any suggestion about the resolution of this ``coincidence'' problem here.}. At this point, the Hubble parameter $H$ will be constant as well. The cosmological equations become: \begin{eqnarray} && \sqrt{H_0^2 + \frac{\Lambda}{10}} + \sqrt{H_0^2 + \sigma^2 \frac{\Lambda}{10}} = \frac{\lambda}{4M^4} - \frac{3}{2} r_c H_0^2 \label{lateeq_1} \\ && -H_0^2 \left[ 4 \left(H_0^2 + \frac{\Lambda}{10} \right)^{-\frac{1}{2}} + 4 \left(H_0^2 + \sigma^2\frac{\Lambda}{10} \right)^{-\frac{1}{2}} + 3 \frac{\sqrt{H_0^2 + \sigma^2 \frac{\Lambda}{10}}} {H_0^2 + \frac{\Lambda}{10}} \right] - \nonumber \\ && -\frac{\Lambda}{10} \left[ 4\left(H_0^2 + \frac{\Lambda}{10} \right)^{-\frac{1}{2}} + \sigma^2 \left(H_0^2 + \sigma^2 \frac{\Lambda}{10} \right)^{-\frac{1}{2}} + 3 \frac{\sqrt{H_0^2 + \sigma^2 \frac{\Lambda}{10}}}{H_0^2 + \frac{\Lambda}{10}} \right] = \nonumber \\ && = -\frac{\lambda}{M^4} + 12 r_c H_0^2 \label{lateeq_2} \end{eqnarray} with the conservation equation trivially satisfied. These are a set of two algebraic equations in the unknowns $H_0$, $\Lambda$ and $\lambda$, whose solution\footnote{Again, since the system can be cast into a set of two fourth-order equations in $H_0^2$, an analytical solution could be found, but its (very complicated) form is unimportant here.} gives the curvature in terms of the bulk cosmological constant and the brane tension, and an unavoidable fine-tuning between these last two parameters. A similar situation occurs if we set $w = -1$, so that the energy density just results in a further contribution to the brane tension. From eq.
(\ref{compeq_2}) we see that the extradimensional pressure $P$ is also constant, and can be expressed in terms of the energy density. Then we eventually recover eqs. (\ref{lateeq_1},\ref{lateeq_2}), with a rescaled brane tension. On the other hand, we can assume that the energy density dominates over the cosmological constant and the tension, so that we can ignore the latter two. In addition, since the Hubble parameter has to be of the same order of magnitude as $\rho$, we can also ignore the cosmological constant when added to $H^2$. In this way eqs. (\ref{compeq_1}),(\ref{compeq_2}),(\ref{compeq_3}) simplify greatly. We then substitute the extradimensional pressure $P$ as evaluated from (\ref{compeq_3}) into (\ref{compeq_2}), so as to obtain the modified Friedmann equations (from now on we substitute $\rho/M^4 \rightarrow \rho$, $P/M^4 \rightarrow P$ and $\tau \rightarrow t$): \begin{eqnarray} && 2H = \frac{\rho}{4} - \frac{3}{2}r_c H^2 \label{denseq_1} \\ && \dot{\rho} + H \left[ \left(4+3w \right)\rho + 3r_c \left(\dot{H}+4H^2 \right) \right] + 2\dot{H} + 11H^2 = 0 \label{denseq_2} \end{eqnarray} Let us stress that the energy density we are considering in these equations is a 5-dimensional energy density. The observable 4-dimensional energy density is obtained by integrating over the compact direction; again, in the limit we are considering, $a(\tau) \gg \mu $, it is easy to see that: \begin{equation} {}^{(4)} \rho \propto \frac{a}{\Lambda}\rho \label{4-D_rho_late} \end{equation} Equations (\ref{denseq_1}),(\ref{denseq_2}) can be further approximated by noting that the Hubble radius can be either much greater or much smaller than the crossover scale $r_c$, so we will consider the two cases separately: \begin{itemize} \item{\it Sub-crossing regime}: In this case we have $r_c H \gg 1$, so that we can discard terms not proportional to $r_c$. 
The equations (\ref{denseq_1}),(\ref{denseq_2}) can be exactly solved to give: \begin{eqnarray} a(t) &=& \left( \frac{t}{t_0}\right)^{\frac{5}{6(w+2)}} \label{a_sub_sol} \\ \rho(t) &=& \rho_0 \left( a(t) \right)^{-\frac{6}{5}(w+2)} \label{rho_sub_sol} \end{eqnarray} Then we can use (\ref{4-D_rho_late}) to express the behavior of ${}^{(4)} \rho$ with respect to the scale factor, specializing the results to the cases of interest of radiation ($w=1/3$) and matter ($w = 0$). We have: \begin{eqnarray} {}^{(4)} \rho_r &=& {}^{(4)} \rho_{r,0}a^{-\frac{9}{5}} \label{subhor_rad} \\ {}^{(4)} \rho_m &=& {}^{(4)} \rho_{m,0}a^{-\frac{7}{5}} \label{subhor_mat} \end{eqnarray} \item{\it Super-crossing regime}: In this case we have $r_c H \ll 1$, so we can drop the terms proportional to $r_c$ in (\ref{denseq_1}),(\ref{denseq_2}). The solutions are: \begin{eqnarray} a(t) &=& \left( \frac{t}{t_0}\right)^{\frac{10}{24w+43}} \label{a_super_sol} \\ \rho(t) &=& \rho_0 \left( a(t) \right)^{-\frac{24w+43}{10}} \label{rho_super_sol} \end{eqnarray} which become, for the 4-dimensional radiation and matter energy densities: \begin{eqnarray} {}^{(4)} \rho_r &=& {}^{(4)} \rho_{r,0}a^{-\frac{41}{10}} \label{superhor_rad} \\ {}^{(4)} \rho_m &=& {}^{(4)} \rho_{m,0}a^{-\frac{33}{10}} \label{superhor_mat} \end{eqnarray} \end{itemize} To analyze the behavior of the brane at early times, we need to reverse the approximation made at the beginning of this section, and assume $a(t) \ll \mu$. After some algebra, substituting $P$ from (\ref{compeq_3}) into (\ref{compeq_2}) as before, the modified Friedmann equations can be approximated as: \begin{eqnarray} && 2H = 2\frac{a}{\mu}\rho - 3r_c H^2 \label{early_eq_1} \\ && \dot{\rho} +\frac{\mu H}{2a} \left[ \rho + \frac{3}{2}r_c \left( \dot{H} + 4H^2\right) - \frac{\mu H}{a} \right]= 0 \label{early_eq_2} \end{eqnarray} These equations cannot be solved analytically, so we must turn to numerics. 
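One way to set up this numerical integration (a sketch only; the values of $\mu$, $r_c$ and the initial data below are purely illustrative): differentiating the constraint (\ref{early_eq_1}) with respect to time and combining it with (\ref{early_eq_2}) yields a linear $2\times 2$ system for $(\dot H, \dot\rho)$ at each step, which a standard ODE solver can then integrate.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, rc = 1.0, 0.1   # illustrative values of the cone scale and crossover scale

def rhs(t, y):
    a, H, rho = y
    # Time derivative of the constraint 2H = 2(a/mu)rho - 3 rc H^2:
    #   (2 + 6 rc H) Hdot - (2a/mu) rhodot = (2/mu) a H rho
    # Conservation equation (early_eq_2), solved together with it:
    #   (3 rc mu H / 4a) Hdot + rhodot
    #       = -(mu H / 2a) rho - 3 rc mu H^3 / a + mu^2 H^2 / (2 a^2)
    A = np.array([[2 + 6 * rc * H, -2 * a / mu],
                  [3 * rc * mu * H / (4 * a), 1.0]])
    b = np.array([2 * a * H * rho / mu,
                  -mu * H * rho / (2 * a) - 3 * rc * mu * H**3 / a
                  + mu**2 * H**2 / (2 * a**2)])
    Hdot, rhodot = np.linalg.solve(A, b)
    return [a * H, Hdot, rhodot]

# Initial data deep inside the cap, a << mu; H is fixed by the constraint.
a0, rho0 = 0.01, 10.0
H0 = (-1 + np.sqrt(1 + 6 * rc * a0 * rho0 / mu)) / (3 * rc)
sol = solve_ivp(rhs, (0, 5), [a0, H0, rho0], rtol=1e-9, atol=1e-12)
a, H, rho = sol.y
print(a[-1] > a0)          # the brane expands
# The constraint should be preserved along the flow:
drift = np.max(np.abs(2 * H - 2 * a * rho / mu + 3 * rc * H**2))
print(drift)
```

The constraint drift at the end serves as a basic accuracy check on the integration.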
The behavior of the scale factor $a(t)$, the Hubble parameter $H(t)$ and the 4-dimensional energy density ${}^{(4)} \rho(t)$ is shown in Fig. \ref{early_plots}. Notice that, within the approximation $a(t) \ll \mu$ we are using, the 4-dimensional energy density is related to the 5-dimensional one by: \begin{equation} {}^{(4)} \rho \propto \frac{\sqrt{\mu a}}{\Lambda}\rho \label{4-D_rho_early} \end{equation} \begin{figure}[ht] \begin{center} \epsfig{file=complete.eps,width=16cm,height=5cm} \caption{Plots of the scale factor $a$ (1), the Hubble parameter $H$ (2) and the 4-dimensional energy density ${}^{(4)} \rho$ (3) as obtained from numerical solutions of eqs. (\ref{early_eq_1}),(\ref{early_eq_2}).} \label{early_plots} \end{center} \end{figure} \section{Comments and conclusions} \label{comm_concl} In this paper we have studied the cosmological properties of a 5-dimensional brane, described by the action (\ref{action}), assumed to be a regularization of a codimension-2 braneworld model. Cosmological evolution is governed by the movement of the brane through the extra dimension, while 4-dimensional cosmology is obtained by integrating over the fifth, compact dimension. The cosmological equations can be specialized to describe different regimes. Close to the inner cap, the evolution of the brane seems to emerge from an initial singularity, much like what happens in standard cosmology. It is known that extradimensional contributions to the energy-momentum tensor result in an effective negative energy density \cite{Vollick:2000uf}, and codimension-1 models have been proposed \cite{Mukherji:2002ft,DeRisi:2007dn} in which this contribution dominates at early times, thus providing a non-singular brane cosmology. Eqs. 
(\ref{early_eq_1}), (\ref{early_eq_2}) show that this negative contribution is actually present in our model, but is exactly cancelled by the modified dynamics of the energy density, so that even though the static space-time is non-singular at the origin, the cosmological dynamics is still plagued by an initial singularity (the 4-dimensional curvature is roughly ${}^{(4)} R \propto H^2/a^{3/2}$). Next, the universe is supposed to enter an energy-dominated phase. In the present model we assume that different sources dominate at different eras, so single contributions can be considered independently. The presence of a curvature term on the brane indicates that a crossover scale $r_c$ can be identified, which is given by the ratio between the 5-dimensional and 6-dimensional gravitational coupling constants, and which should represent the scale at which extradimensional physics becomes effective. In the DGP scenario \cite{Dvali:2000hr}, these extradimensional effects can provide, at super-crossing scales, a self-accelerated expansion without the need for a cosmological constant \cite{Deffayet:2000uy}. Here the dynamics is quite different. We find that, if the energy density dominates at a scale smaller than the crossover scale, cosmology on the brane differs markedly from standard 4-dimensional cosmology. If, on the contrary, the radiation and matter eras begin on a large enough scale (or, equivalently, if the crossover scale is small enough), we find (\ref{superhor_rad}), (\ref{superhor_mat}) that the energy density scales with the scale factor with almost the same power law as in standard cosmology, differently from what happens in other examples of induced cosmology on a codimension-2 brane presented in the literature. 
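As a cross-check of the super-crossing power laws (\ref{a_super_sol}), (\ref{rho_super_sol}): dropping the $r_c$ terms in (\ref{denseq_1}), (\ref{denseq_2}) leaves $2H = \rho/4$ and $\dot\rho + (4+3w)H\rho + 2\dot H + 11H^2 = 0$, and the quoted exponent satisfies both identities for arbitrary $w$. A short symbolic verification:

```python
import sympy as sp

t, t0, w = sp.symbols('t t_0 w', positive=True)

# Super-crossing regime: drop the r_c terms in (denseq_1), (denseq_2).
p = sp.Integer(10) / (24 * w + 43)        # exponent quoted in (a_super_sol)
a = (t / t0) ** p
H = sp.diff(a, t) / a                      # Hubble parameter, H = p/t
rho = 8 * H                                # from 2H = rho/4

friedmann = sp.simplify(2 * H - rho / 4)
conservation = sp.simplify(
    sp.diff(rho, t) + (4 + 3 * w) * H * rho + 2 * sp.diff(H, t) + 11 * H**2)
print(friedmann, conservation)             # both simplify to 0
```

Since $\rho = 8H \propto t^{-1}$, the scaling $\rho \propto a^{-(24w+43)/10}$ of (\ref{rho_super_sol}) follows immediately.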
Eventually, the energy density will drop below the order of magnitude of the cosmological constant and the tension (notice that the model under discussion depends crucially on the presence of a bulk cosmological constant to be dynamically meaningful); at that point, the brane undergoes a phase of de Sitter expansion, with an effective cosmological constant given by the solution of eqs. (\ref{lateeq_1}), (\ref{lateeq_2}). These equations also impose an unavoidable fine-tuning between the bulk cosmological constant and the brane tension. It is possible that such a dependence could arise in the process of nucleation of the brane, which is of course fully nonlinear and very difficult to describe. A hint towards this assumption comes from the observation that the effective brane tension can be rescaled by tuning the extradimensional pressure. Still, even at the level of our analysis, it seems that the self-tuning property is lost once we go beyond the static solutions, since the effective cosmological constant on the brane has a non-trivial dependence on the tension. Summing up, our investigation suggests that a braneworld model embedded in a conical de Sitter bulk could have a viable cosmology, i.e. the evolution of the brane during the radiation- and matter-dominated phases is similar to what happens in standard cosmology; however, at the level of our analysis, the self-tuning property is lost, nor are the extradimensional contributions enough to address the initial singularity problem. In order to address these drawbacks, it would be important to develop a model in which the brane movement could modify the bulk geometry, so that the deficit angle could become in some sense ``dynamical''. This is a formidable problem, as said elsewhere, because it would require a solution of the full 6-dimensional problem, which means tackling a set of non-linear partial differential equations. We are working on this topic, though preliminary results are not encouraging. 
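The fine-tuning mentioned above can be made explicit numerically. A minimal sketch (all parameter values below are illustrative) codes the residuals of (\ref{lateeq_1}), (\ref{lateeq_2}) directly: the static point $H_0 = 0$ solves both equations simultaneously only when the tension is tuned against the bulk cosmological constant, $\lambda = 4M^4(1+\sigma)\sqrt{\Lambda/10}$, while any detuned $\lambda$ leaves a nonzero residual.

```python
import numpy as np

# Residuals of the late-time equations (lateeq_1), (lateeq_2), coded as
# written, for given H0^2 and lambda.  Parameter values are illustrative.
Lam, sigma, rc, M = 1.0, 0.5, 0.1, 1.0

def residuals(H2, lam):
    A = np.sqrt(H2 + Lam / 10)              # sqrt(H0^2 + Lambda/10)
    B = np.sqrt(H2 + sigma**2 * Lam / 10)   # sqrt(H0^2 + sigma^2 Lambda/10)
    r1 = A + B - lam / (4 * M**4) + 1.5 * rc * H2
    lhs = (-H2 * (4 / A + 4 / B + 3 * B / A**2)
           - (Lam / 10) * (4 / A + sigma**2 / B + 3 * B / A**2))
    r2 = lhs + lam / M**4 - 12 * rc * H2
    return r1, r2

# Tuned tension: the static point H0 = 0 then solves both equations.
lam_tuned = 4 * M**4 * (1 + sigma) * np.sqrt(Lam / 10)
print(residuals(0.0, lam_tuned))        # both residuals vanish
# A detuned tension leaves a nonzero residual: no static solution.
print(residuals(0.0, 1.1 * lam_tuned))
```

Away from the static point, the same residuals can be fed to a root finder to trace the effective cosmological constant as a function of $\lambda$ and $\Lambda$.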
A question remains as to how reliable a ``mirage cosmology'' approximation is when applied to a codimension-2 cosmological brane (though regularized). We assume that the probe-brane approximation should work as long as the curvature does not blow up. In this spirit, the undesirable singularity should not be taken too seriously, also because in a realistic model the brane should emerge from a nucleation process in the bulk. However, the lack of self-tuning should be a robust prediction, provided some other effect (such as supersymmetry) does not change the picture drastically. Another important development would be the study of perturbations around the background we have presented. Perturbations around the static solution give, as stated in section \ref{intro}, the tensorial structure of 4-dimensional gravity, which has an unwanted scalar degree of freedom that propagates on the brane. It is conjectured that this degree of freedom would be reabsorbed in a nonlinear realization of our codimension-2 model. Perturbations would allow us to identify the exact form of the tree-level graviton exchange between two probe masses in 4-dimensional gravity, which, since 4-dimensional physics is obtained by integrating over a time-varying azimuthal dimension, would imply a time-varying Planck mass and observational signatures that would change in different cosmological epochs as well. This could result in a tight constraint on the parameters of the model (and of other regularized codimension-2 brane models) from space-based experiments. Unfortunately, it is not clear how to obtain reliable perturbation equations starting from mirage cosmology; so in order to perform these very interesting investigations, one is again led to the necessity of a full 6-dimensional nonlinear study of brane cosmology. 
A simpler, and yet interesting, development would be to study the supersymmetric extension of the present model, which could help to alleviate the fine-tuning problem between the brane tension and the bulk cosmological constant with a suitable dilaton potential. All these aspects will be addressed in forthcoming works. \section*{Acknowledgements} It is a pleasure to thank Antonio Cardoso, Olindo Corradini, Maurizio Gasperini, Kazuya Koyama, Roy Maartens, Antonios Papazoglou, Fabio Silva and David Wands for helpful discussions and comments on the manuscript. \newpage
\section{Introduction} Quantum dot spin qubits are among the most promising and most intensively investigated building blocks of possible future solid state quantum computation systems \cite{LossDi98,Hanson07}. One of the major limitations of the decoherence time of the confined electron spin is its interaction with surrounding nuclear spins by means of hyperfine interaction \cite{KhaLossGla02,KhaLossGla03,expMarcus,Koppens05,Petta05,Koppens06,Koppens08,Braun05}. For reviews the reader is referred to Refs.~\cite{SKhaLoss03,Zhang07,Klauser07,Coish09,Taylor07}. Apart from this adverse aspect, hyperfine interaction can act as a resource for quantum information processing \cite{Taylor03,SchCiGi08,SchCiGi09,ChriCiGi09, ChriCiGi07,ChriCiGi08}. For the above reasons it is of key interest to understand the hyperfine-induced spin dynamics. Most of the work in this direction, for single as well as double quantum dots, has been carried out under the assumption of a strong magnetic field coupled to the central spin system. This allows for a perturbative treatment or a complete neglect of the electron-nuclear ``flip-flop'' part of the Hamiltonian, yielding great simplification \cite{KhaLossGla02, KhaLossGla03, Coish04, Coish05, Coish06, Coish08}. In the present paper we consider the case of zero magnetic field where such approximations fail, and we therefore concentrate on exact methods. In the case of a single quantum dot spin qubit the usual Hamiltonian describing hyperfine interaction with surrounding nuclei is integrable by means of the Bethe ansatz as devised by Gaudin several decades ago \cite{Gaudin,John09,BorSt071,BorSt09}. In the following we shall refer to that system also as the Gaudin model. Nevertheless, exact results are rare also here because the Bethe ansatz equations are very hard to handle. 
Hence there are mainly three different routes in order to gain some exact results: (i) restriction of the initial state to the one-magnon sector \cite{KhaLossGla02, KhaLossGla03}, (ii) restriction to small system sizes enabling progress via exact numerical diagonalizations \cite{SKhaLoss02,SKhaLoss03}, and (iii) restrictions on the hyperfine coupling constants \cite{BorSt07, ErbS09}. In the present paper we will follow the third route and study in detail the electron spin as well as the entanglement dynamics in a double quantum dot model with partially homogeneous couplings: The hyperfine coupling constants are chosen to be equal to each other, whereas the exchange coupling is arbitrary. Although the assumption of homogeneous hyperfine constants (being the same for each spin in the nuclear bath) is certainly a great simplification of the true physical situation, models of this type offer the opportunity to obtain exact, approximation-free results which are scarce otherwise. Moreover, such models have been the basis of several recent theoretical studies leading to concrete predictions \cite{SchCiGi08,SchCiGi09,ChriCiGi09,ChriCiGi08}. The paper is organized as follows: In Sec. \ref{model} we introduce the Hamiltonian of the hyperfine interaction and derive the spin and entanglement dynamics for homogeneous hyperfine coupling constants. In Sec. \ref{dynamics} we study the spin and entanglement dynamics for different exchange couplings and bath polarizations. For the completely homogeneous case of the exchange coupling being the same as the hyperfine couplings we find an empirical rule describing the transition from low-polarization dynamics to high-polarization dynamics. The latter shows a jump in the amplitude when the exchange coupling is varied away from complete homogeneity. This effect, as well as features like the periodicity of the dynamics, is explained by analyzing the level spacings and their contributions to the dynamics. In Sec. 
\ref{decoherence} we extract the decoherence time from the dynamics by investigating the scaling behaviour of the short time electron spin dynamics. The result turns out to be in good agreement with experimental findings. \section{Model and formalism} \label{model} The hyperfine interaction in a system of two quantum dot spin qubits is described by the Hamiltonian \begin{equation} \label{1} H= \vec{S}_1 \cdot \sum_{i=1}^N A_i^1 \vec{I}_i + \vec{S}_2 \cdot \sum_{i=1}^N A_i^2 \vec{I}_i + \Jx \vec{S}_1 \cdot \vec{S}_2 , \end{equation} where $\Jx$ denotes the exchange coupling between the two electron spins $\vec S_1$, $\vec S_2$, and $A_i^1 $, $A_i^2 $ are the coupling parameters for their hyperfine interaction with the surrounding nuclear spins $\vec I_i$. \begin{figure}[h!] \begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{evenodd1.eps}} \end{flushright} \caption{\label{Fig:evenodd1} (Color online) Spin dynamics for $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}, \ket{T_+}, \ket{T_0}$ and an even number of spins. The number of down spins in the bath is $N_D=20$ in all plots, yielding polarizations $p_b\approx 5\%-30\%$. Note that the time unit is rescaled according to the number of bath spins. We see periodicity with $\pi$. For $\ket{\alpha_1}=\ket{T_0}$ and $N=58$ we count the number of local extrema on one period and find $N-2N_D+1=58-40+1=19$ as expected.} \end{figure} In a realistic quantum dot these quantities are proportional to the square modulus of the electronic wave function at the sites of the nuclei and therefore clearly spatially dependent \begin{equation} \label{cpl} A_i^{j}=A_i v \left|\psi^{j}(\vec{r}_i)\right|^2, \end{equation} where $v$ is the volume of the unit cell containing one nuclear spin and $\psi^{j}(\vec{r}_i)$ is the electronic wave function of electron $j=1,2$ at the site of $i$-th nucleus. 
The quantity $A_i$ denotes the hyperfine coupling strength which depends on the respective nuclear species through the nuclear gyromagnetic ratio \cite{Coish09}. It should be stressed that the nuclear spins can have different lengths. In a GaAs quantum dot, for example, all Ga and As isotopes carry the same nuclear spin $I_i=3/2$, whereas in an InAs quantum dot the In isotopes carry a nuclear spin of $I_i=9/2$ \cite{SKhaLoss03}. In any case the Hamiltonian obviously conserves the total spin $\vec{J}=\vec{S} + \vec{I}$, where $\vec{S}=\vec{S}_1 + \vec{S}_2$ and $\vec{I}=\sum_{i=1}^N \vec{I}_i$. The model to be studied in this paper now results from neglecting the spatial variation of the hyperfine coupling constants and choosing them to be equal to each other, $A^1_i=A^2_i=A/N$. Variation of the exchange coupling $\Jx$ between the two central spins then gives rise to an inhomogeneity in the system. Hence the two electron spins interact with a common nuclear spin bath. Moreover, if small variations of the coupling constants were included, degenerate energy levels would slightly split and give rise to a modified {\em long-time} behavior of the system. In our quantitative studies reported below, however, we focus on the {\em short-time} properties where decoherence phenomena take place. Indeed, in section \ref{decoherence} we obtain realistic $T_{2}$ decoherence time scales in an almost analytical fashion. \begin{figure}[h!] \begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{evenodd2.eps}} \end{flushright} \caption{\label{Fig:evenodd2} (Color online) Spin dynamics for $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}, \ket{T_+}, \ket{T_0}$ and an odd number of spins. The number of down spins in the bath is $N_D=20$ in all plots, giving polarizations $p_b \approx 2\%-30\%$. In contrast to the case of an even number of spins we see periodicity with $2 \pi$. 
For $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}$ and $N=45$ we count the number of local extrema on half the period and find $N-2N_D+1=45-40+1=6$ as expected.} \end{figure} Consistent with the homogeneous couplings, we choose the lengths of the bath spins to be equal to each other. For simplicity we restrict the nuclear spins to $I_i=1/2$. We expect our results to be of a quite general nature, not depending strongly on this choice \cite{John09}. Note that both the square $\vec S^{2}$ of the total central spin and the square $\vec I^{2}$ of the total bath spin are separately conserved quantities. Considering the two electrons to interact with a common nuclear spin bath, as in our model, corresponds to a physical situation where the electrons are comparatively close to each other. This leads to the question of whether our model is also adapted to the case of two electrons in one quantum dot, rather than in two nearby quantum dots. Assuming perfect confinement, in the former case one of the two electrons would be forced into the first excited state, which typically has a zero around the dot center. Thus, the coupling constants near the very center of the dot would clearly be different for the two electrons. Therefore our model is more suitable for the description of two electrons in two nearby quantum dots than for the case of two electrons in one dot. Let us now turn to the exact solution of our homogeneous coupling model and calculate the spin and entanglement dynamics from the eigensystem. In what follows we shall work in subspaces of a fixed eigenvalue of $J^z$. Thus, the expectation values of the $x$- and $y$-components of the central and nuclear spins vanish, and we only have to consider their $z$-components. \begin{figure}[h!] 
\begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{evenoddJ1.eps}} \end{flushright} \caption{\label{Fig:evenoddJ1} (Color online) Spin dynamics for $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}$ and $N_D=20$, resulting in $p_b \approx 6 \% - 30 \%$. If $\Jx$ is an odd multiple of $A/2N$ we see periodicity with $2\pi$. } \end{figure} If all hyperfine couplings are equal to each other $A^1_i=A^2_i=A/N$, the Hamiltonian (\ref{1}) can be rewritten in the following way \begin{equation} \label{5} H=H_{\operatorname{hom}}+\left(\Jx-\frac{A}{N} \right)\vec{S}_1 \cdot \vec{S}_2 \end{equation} with \begin{equation} \label{2} H_{\operatorname{hom}}=\frac{A}{2N}\left( \vec{J}^2 - \vec{S}^2_1 - \vec{S}^2_2 - \vec{I}^2\right). \end{equation} Omitting the quantum numbers corresponding to a certain Clebsch-Gordan decomposition of the bath, the eigenstates are labelled by $J,m,S$ associated with the operators $\vec{J}^2, J^z, \vec{S}^2$. The two central spins couple to $S=0,1$. Hence the eigenstates of $H$ are given by triplet states $\ket{J,m,1}$, corresponding to the coupling of a spin of length one to an arbitrary spin, and a singlet state $\ket{J,m,0}$. The explicit expressions are given by (\ref{eig1}, \ref{eig2}, \ref{eig3}) in appendix A. The corresponding eigenvalues read as follows: \begin{small} \begin{subequations}\label{4} \begin{eqnarray} H \ket{I+1,m,1}&=&\left( \frac{A}{N}I+\frac{\Jx}{4} \right) \ket{I+1,m,1}\\ \label{4b} H \ket{I,m,1}&=&\left( \frac{\Jx}{4}-\frac{A}{N}\right) \ket{I,m,1}\\ \label{4c} H \ket{I-1,m,1}&=&\left(-\frac{A}{N}I+\frac{\Jx}{4}-\frac{A}{N} \right) \ket{I-1,m,1}\\ H \ket{I,m,0}&=& -\frac{3}{4}\Jx \ket{I,m,0} \end{eqnarray} \end{subequations} \end{small} Now we are ready to evaluate the time evolution of the central spins and their entanglement from the eigensystem of the Hamiltonian. 
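The spectrum (\ref{4}) is easily confirmed by brute-force diagonalization for a small bath. The sketch below (the values $A=1$, $\Jx=0.7$ and $N=4$ bath spins $1/2$ are illustrative only) builds the Hamiltonian (\ref{1}) with equal couplings $A/N$ and compares its distinct eigenvalues with the four branches above; a bath of four spins $1/2$ allows $I = 0, 1, 2$.

```python
import numpy as np

def spin_op(site, axis, nsites):
    """Spin-1/2 operator (axis 0,1,2 = x,y,z) acting on a single site."""
    s = [np.array([[0, 1], [1, 0]], complex) / 2,
         np.array([[0, -1j], [1j, 0]]) / 2,
         np.array([[1, 0], [0, -1]], complex) / 2][axis]
    m = np.eye(1, dtype=complex)
    for j in range(nsites):
        m = np.kron(m, s if j == site else np.eye(2))
    return m

A, Jx, N = 1.0, 0.7, 4             # illustrative couplings, 4 bath spins
nsites = N + 2                     # sites 0,1: central spins; 2..N+1: bath
dim = 2 ** nsites
H = np.zeros((dim, dim), complex)
for ax in range(3):
    S1, S2 = spin_op(0, ax, nsites), spin_op(1, ax, nsites)
    H += Jx * S1 @ S2
    for i in range(N):
        H += (A / N) * (S1 + S2) @ spin_op(2 + i, ax, nsites)

evals = np.unique(np.round(np.linalg.eigvalsh(H), 8))

# Branches of eq. (4); the bath spin takes the values I = 0, 1, 2.
pred = {-0.75 * Jx}                                   # singlet
for I in (0, 1, 2):
    pred.add(A * I / N + Jx / 4)                      # J = I + 1
    if I >= 1:
        pred.add(Jx / 4 - A / N)                      # J = I
        pred.add(-A * I / N + Jx / 4 - A / N)         # J = I - 1
print(sorted(evals))
print(sorted(pred))        # the two lists of distinct levels agree
```

The multiplicities (not checked here) follow from the Clebsch-Gordan decomposition of the bath.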
We consider initial states $\ket{\alpha}$ of the form $\ket{\alpha}=\ket{\alpha_1}\ket{\alpha_2}$, where $\ket{\alpha_1}$ is an arbitrary central spin state and $\ket{\alpha_2}$ is a product of $N$ states $\ket{\uparrow},\ket{\downarrow}$. \begin{figure}[h!] \begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{evenoddJ2.eps}} \end{flushright} \caption{\label{Fig:evenoddJ2} (Color online) Spin dynamics for $\ket{\alpha_1}=(1/\sqrt{13})\left(2 \ket{\Uparrow \Downarrow}+3\ket{\Downarrow \Uparrow}\right)$ and $N_D=20$, resulting in $p_b \approx 6 \% - 30 \%$.} \end{figure} The physical significance of this choice becomes clear by rewriting the electron-nuclear coupling parts of the Hamiltonian in terms of creation and annihilation operators: \begin{equation} \label{flipflop} \vec{S}_i \cdot \vec{I}_j=\frac{1}{2}\left(S_i^+I_j^- + S_i^-I_j^+\right)+S_i^z I_j^z \end{equation} Obviously the second term does not contribute to the dynamics for initial states which are simple product states. Hence by considering initial states of the above form, we mainly study the influence of the flip-flop part on the dynamics of the system. This is exactly the part which is eliminated by considering a strong magnetic field, as in Refs. \cite{KhaLossGla02, KhaLossGla03, Coish04, Coish05, Coish06, Coish08}. As the $2^N$ dimensional bath Hilbert space is spanned by the $\vec{I}^2$ eigenstates, every product state can be written in terms of these eigenstates. If $N_D \leq N/2$ is the number of down spins in the bath, it follows \begin{small} \begin{equation} \label{8} \ket{\underbrace{\downarrow \ldots \downarrow}_{N_D} \uparrow \ldots \uparrow} = \sum_{k=0}^{N_D} \sum_{\left\{S_i\right\}} c_k^{\left\{S_i\right\}} \ket{\underbrace{\frac{N}{2}-k}_{I},\frac{N}{2}-N_D,\left\{S_i\right\}}, \end{equation} \end{small} where the quantum numbers $\lbrace S_i \rbrace$ are due to a certain Clebsch-Gordan decomposition of the bath. 
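The decomposition (\ref{flipflop}) is the standard ladder-operator identity and is easy to verify numerically for two spins $1/2$ (nothing below is model-specific):

```python
import numpy as np

# Spin-1/2 matrices and ladder operators
sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], complex) / 2
sp_, sm = sx + 1j * sy, sx - 1j * sy    # S^+ = Sx + i Sy, S^- = Sx - i Sy

# S_i . I_j on the two-spin space, built both ways
dot = sum(np.kron(s, s) for s in (sx, sy, sz))
flipflop = 0.5 * (np.kron(sp_, sm) + np.kron(sm, sp_)) + np.kron(sz, sz)
print(np.allclose(dot, flipflop))        # True
```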
In (\ref{8}) we assumed the first $N_D$ spins to be flipped, which is no loss of generality due to the homogeneity of the couplings. For the following discussions it is convenient to introduce the bath polarization $p_b=\left(N-2N_D \right)/N $. Using (\ref{8}) and inverting (\ref{eig1}, \ref{eig2}, \ref{eig3}), the time evolution can be calculated by writing $\ket{\alpha}$ in terms of the above eigenstates and applying the time evolution operator. Using (\ref{eig1}, \ref{eig2}, \ref{eig3}) again and tracing out the bath degrees of freedom we arrive at the reduced density matrix $\rho(t)$, which enables us to evaluate the expectation value $\langle S^z_{1/2} (t) \rangle$ and the dynamics of the entanglement between the two central spins. \begin{figure}[h!] \begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{con_hom2.eps}} \end{flushright} \caption{\label{Fig:con_hom1} (Color online) Entanglement dynamics for $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}$ and $N_D=20$, resulting in $p_b \approx 6 \% - 30 \%$. In the completely homogeneous case the amplitude is small even for high polarization. Generation of entanglement benefits from high polarization.} \end{figure} As a measure of the entanglement we use the concurrence \cite{Wootters97} \begin{equation} C(t)=\operatorname{max}\lbrace0,\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4}\rbrace, \end{equation} where $ \lambda_i$ are the eigenvalues of the non-hermitian matrix $\rho(t) \tilde{\rho}(t)$ in decreasing order. Here $\tilde{\rho}(t)$ is given by $\left(\sigma_y \otimes \sigma_y \right)\rho^*(t) \left( \sigma_y \otimes \sigma_y \right) $, where $\rho^*(t)$ denotes the complex conjugate of $\rho(t)$. 
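A minimal implementation of this concurrence formula (the function name and the sanity checks at the end are our own illustration, not part of the model):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    # Eigenvalues of the non-Hermitian product, in decreasing order
    lam = np.sort(np.linalg.eigvals(rho @ rho_tilde).real)[::-1]
    lam = np.sqrt(np.clip(lam, 0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4); bell[[0, 3]] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(concurrence(np.outer(bell, bell)))            # ~1: maximally entangled
prod = np.zeros(4); prod[0] = 1                     # |00>
print(concurrence(np.outer(prod, prod)))            # ~0: product state
```

Applied to the reduced density matrix $\rho(t)$ of the two central spins, this yields the entanglement dynamics $C(t)$ discussed below.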
The coefficients $c_k^{\left\{S_i\right\}}$ are of course products of Clebsch-Gordan coefficients, which enter the time evolution through the quantity \begin{equation} d_k=\sum_{\lbrace S_i \rbrace}\left( c_k^{\lbrace S_i \rbrace}\right)^2 \end{equation} and usually have to be calculated numerically. The main advantage of considering $I_i=1/2$ is that in this case a closed expression for $d_k$ can be derived \cite{BorSt07}: \begin{equation} \label{10} d_k =\frac{N_D!(N-N_D)!(N-2k+1)}{(N-k+1)!k!} \end{equation} For further details on the calculation of the time-dependent reduced density matrix and the dynamical quantities derived therefrom we refer the reader to appendix B. Finally, it is a simple but remarkable difference between our one-bath system with two central spins and the homogeneous Gaudin model of a single central spin \cite{SKhaLoss03,BorSt07} that, even if we choose $\ket{\alpha_2}$ as an $\vec{I}^2$ eigenstate and hence fix $k$ in (\ref{8}) to a single value, due to the higher number of eigenvalues the resulting dynamics cannot be described by a single frequency. \section{Basic dynamical properties} \label{dynamics} We now give an overview of the basic dynamical features of the system under consideration. Due to the homogeneous couplings, the dynamics of the two central spins can be read off from each other. \begin{figure}[h!] \begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{con_hom1.eps}} \end{flushright} \caption{\label{Fig:con_hom} (Color online) Entanglement dynamics for $\ket{\alpha_1}=\ket{T_+}$ and $N_D=20$, resulting in $p_b \approx 6 \% - 30 \%$. Instead of an oscillating function we see discrete peaks. Variation of the exchange coupling has no influence because $\ket{T_+}$ is an eigenstate of the central spin coupling term.} \end{figure} Hence the following discussion of the dynamics will be restricted to $\langle S^z_1(t) \rangle$. \subsection{Electron spin dynamics} In Figs. 
\ref{Fig:evenodd1}, \ref{Fig:evenodd2} we consider the completely homogeneous case $\Jx=A/N$ and plot the dynamics for $\ket{\alpha}=\ket{\Uparrow \Downarrow}, \ket{T_+},\ket{T_0}$ and varying polarization $p_b\approx 2\% - 30\%$. A polarization of $30 \%$ may not seem particularly high, but the behavior typical of high polarizations indeed already sets in at this value. We omit the singlet case because it is an eigenstate of the system. In Fig. \ref{Fig:evenodd1} the number of spins is even, whereas in Fig. \ref{Fig:evenodd2} an odd number is chosen. Note that we measure the time $t$ in rescaled units $\hbar/(A/2N)$ depending on the number of bath spins \cite{Note1}. Similarly to the homogeneous Gaudin system \cite{SKhaLoss03,BorSt07}, from Figs. \ref{Fig:evenodd1}, \ref{Fig:evenodd2} we see that the dynamics for an even number of spins is periodic with period $\pi$ (in rescaled time units), whereas an odd number of spins leads to a period of $2 \pi$. This is the case for $\Jx$ being any integer multiple of $A/N$. These characteristics can of course be explained by analyzing the level spacings in the different situations. For example, for an even number of bath spins, all level spacings are even multiples of $A/2N$ \cite{Note1}, resulting in dynamics periodic with $\pi$. However, if the number of spins is odd, we get even and odd level spacings (in units of $A/2N$), giving a period of $2 \pi$. For the given case of completely homogeneous couplings the dynamics can be nicely characterized: The number of local extrema for an even number of bath spins within a complete period, as well as for an odd number of bath spins within half a period, is in both cases given by $N-2N_D+1$. \begin{figure}[h!] 
\begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{Jsubzero.eps}} \end{flushright} \caption{\label{Fig:Jsubzero} (Color online) Spin dynamics on short time scales for $\Jx \lessgtr 0$, $p_b=2/N$, and $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}$. The thick solid lines mark the zero level $\langle S^z_1 \rangle=0$ while the thick dashed line (lower panel) represents the threshold level $\langle S^z_1 \rangle=0.2$ as appropriate for $\Jx<0$ and small spin baths. } \end{figure} This -- so far empirical -- rule holds for all initial central spin states and is illustrated in Figs. \ref{Fig:evenodd1} and \ref{Fig:evenodd2}. Let us now investigate the spin dynamics for varying exchange coupling, i.e. the case $\Jx\neq A/N$. Note that for the initial central spin state $\ket{\alpha_1}=\ket{T_0}$ this inhomogeneity has no influence on the spin dynamics since $\ket{T_0}$ is an eigenstate of $\vec{S}_1 \cdot \vec{S}_2$ and \begin{equation} \left[ H_{\operatorname{hom}},\vec{S}_1 \cdot \vec{S}_2 \right]=0. \end{equation} In Fig. \ref{Fig:evenoddJ1} the dynamics for $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}$ and varying exchange coupling is plotted. In the upper two panels we consider the case of low polarization $p_b \approx 10\%$ for an even and an odd number of spins. The remaining two panels show the dynamics for high polarization $p_b \approx 30 \%$. In Fig. \ref{Fig:evenoddJ2} the plots are ordered likewise for a more general linear combination of $\ket{\Uparrow \Downarrow}$ and $\ket{T_0}$ , $\ket{\alpha_1}=(1/\sqrt{13})\left( 2 \ket{\Uparrow \Downarrow} + 3 \ket{\Downarrow \Uparrow} \right) $. From Figs. \ref{Fig:evenoddJ1}, \ref{Fig:evenoddJ2} we see that if the exchange coupling is an odd multiple of $A/2N$, the even-odd effect described above does not occur and we have periodicity of $2 \pi$. 
In both of the aforementioned situations the time evolutions are symmetric with respect to the middle of the period, which is a consequence of the invariance of the underlying Hamiltonian under time reversal. For a more general exchange coupling, the periodicity, along with the mirror symmetry, of the dynamics is broken on the above time scales. Considering the case of low polarization, neither the dynamics of initial states with a product central spin state nor that of states with an entangled one changes dramatically if $\Jx$ is varied. However, if the polarization is high, the spin oscillates mainly with one frequency proportional to $\Jx$. \begin{figure}[h!] \begin{flushright} \resizebox{\linewidth}{!}{ \includegraphics{loglog2.eps}} \end{flushright} \caption{\label{Fig:scale} (Color online) Position of the first zero of $\langle S^z_1(t) \rangle$ for $\Jx \geq 0$, and the first intersection with the threshold level $\langle S^z_1 \rangle=0.2$ for $\Jx < 0$, on a double logarithmic scale. We choose $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}$ and a polarization of $p_b=2/N \Leftrightarrow N=2N_D+2$. The curves are fitted to a power law $\propto N^\nu$ with $\nu=-0.52$ ($\Jx=(A/N)$), $\nu=-0.51$ ($\Jx=1.85(A/N)$), $\nu=-0.53$ ($\Jx=0$), $\nu=-0.51$ ($\Jx=-1.5(A/N)$), $\nu=-0.50$ ($\Jx=-1.85(A/N)$). Note that the parallel offset between the plots for $\Jx \geq 0$ and $\Jx <0$ results from the fact that the intersection with the higher threshold level happens closer to zero.} \end{figure} Furthermore the amplitude of the oscillation is larger for the case $\Jx \neq A/N$ than for the completely homogeneous case. This behaviour can be understood as follows: If the polarization is high, $d_{N_D} \approx 1$, whereas $d_k \approx 0$ for $k \neq N_D$. This means that, in calculating the spin and entanglement dynamics, we only have to consider the term $k=N_D$. 
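This dominance can be made quantitative with the closed expression (\ref{10}); the values $N=40$, $N_D=2$ below (polarization $p_b = 0.9$) are illustrative:

```python
from math import factorial

def d(N, ND, k):
    """Closed form (10) for the weights d_k (spin-1/2 bath)."""
    return (factorial(ND) * factorial(N - ND) * (N - 2 * k + 1)
            / (factorial(N - k + 1) * factorial(k)))

N, ND = 40, 2                      # high polarization: p_b = (N - 2*ND)/N = 0.9
weights = [d(N, ND, k) for k in range(ND + 1)]
print(sum(weights))                # the d_k sum to 1
print(weights[ND])                 # ~0.95: the k = N_D term dominates
```

At low polarization, by contrast, the weight spreads over many $k$, and with it over many frequencies.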
An evaluation of the coefficients for the different frequencies now shows that the main contribution results from $E_{T_0}-E_S = (A/N)-\Jx$ in obvious notation. Hence, as the polarization is increased further and further, this is the only frequency left. If $\Jx=(A/N)$, the two associated eigenstates are degenerate, so that in this case the main contribution to the dynamics is constant. This explains why the amplitude of the high-polarization dynamics in Figs. \ref{Fig:evenoddJ1}, \ref{Fig:evenoddJ2} is large compared to the one in Figs. \ref{Fig:evenodd1}, \ref{Fig:evenodd2}. For further details the reader is referred to Appendix B. \subsection{Entanglement dynamics} In Figs. \ref{Fig:con_hom1}, \ref{Fig:con_hom} the concurrence dynamics $C(t)$ for $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}, \ket{T_+}$ is plotted for the same polarizations as in Figs. \ref{Fig:evenoddJ1}, \ref{Fig:evenoddJ2} and varying exchange coupling. It is interesting that in the second case the concurrence drops to zero for certain periods of time. The situation is very similar for the case $\ket{\alpha_1}=\ket{T_0}$, not shown above. As already explained concerning the spin dynamics, the exchange coupling $\Jx$ of course has no influence because $\ket{T_+}$ is an eigenstate of $\vec{S}_1 \cdot \vec{S}_2$. Interestingly, for $\ket{\alpha_1}=\ket{\Uparrow \Downarrow}$ and a small polarization, changing from $\vert \Jx \vert > 1$ to $\vert \Jx \vert <1$ (in units of $A/N$) increases the maximum value of the function $C(t)$. Furthermore, we see from Fig. \ref{Fig:con_hom1} that, surprisingly, the entanglement is much smaller for the completely homogeneous case $\Jx = A/N$ than for $\Jx \neq A/N$, even for low polarization. \section{Decoherence and its quantification} \label{decoherence} Depending on the choice of the exchange coupling, the dynamics of the one-bath model can either be symmetric and periodic or lack any such regularities.
It is not entirely obvious to what extent these dynamics constitute a process of decoherence. Considering for example the spin dynamics for an integer $\Jx$ and an even number of bath spins shown in Fig. \ref{Fig:evenodd1}, one can either regard the decay of the spin as decoherence or, especially due to the symmetry of the function, as part of a simple periodic motion. In Ref.~\cite{BorSt07} the first zero of $\langle S^z_1(t) \rangle$ has been considered as a measure for the decoherence time. In Fig. \ref{Fig:Jsubzero} we illustrate examples of the spin dynamics on short time scales for $\Jx \geq 0$, $\Jx <0$ and a varying number of bath spins. For $\Jx \geq 0$ this procedure is straightforward, meaning that $\langle S^z_1(t) \rangle$ crosses the horizontal line $\langle S^z_1 \rangle=0$ before reaching its first minimum with $\langle S^z_1(t) \rangle<0$. However, for $\Jx<0$ and a sufficiently small number of bath spins, as seen from the lower panel of Fig. \ref{Fig:Jsubzero}, such a first minimum is attained before the first actual zero $\langle S^z_1(t) \rangle=0$. This first zero indeed occurs at much larger times $t$, whose scaling behavior as a function of system size $N$ is clearly different from the zero positions found for $\Jx \geq 0$, as we have checked in a detailed analysis. Thus, our evaluation scheme needs to be modified for $\Jx<0$. An obvious way out of this problem is either to consider large enough spin baths, where such an effect does not occur, or to evaluate the intersection with an alternative ``threshold level'' $\langle S^z_1 \rangle>0$. In Fig. \ref{Fig:Jsubzero} we have chosen $\langle S^z_1 \rangle=0.2$, which will be the basis of our following investigation. As a further alternative, one could also consider the position of the first minimum of $\langle S^z_1(t) \rangle$.
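The threshold-based extraction described above is simple to implement numerically. The sketch below is purely illustrative (the damped oscillation is a stand-in test signal, not our exact dynamics): it locates the first downward crossing of a sampled $\langle S^z_1(t) \rangle$ curve with a given threshold by linear interpolation between neighboring samples.

```python
import numpy as np

def first_crossing(t, s, threshold=0.0):
    """Return the first time at which the sampled signal s(t) crosses
    the given threshold from above, using linear interpolation."""
    above = s > threshold
    for i in range(len(t) - 1):
        if above[i] and not above[i + 1]:
            # interpolate linearly between the bracketing samples
            frac = (s[i] - threshold) / (s[i] - s[i + 1])
            return t[i] + frac * (t[i + 1] - t[i])
    return None  # no crossing found

# Illustrative test signal: a damped oscillation starting at <S^z_1> = 0.5
t = np.linspace(0.0, 10.0, 2001)
s = 0.5 * np.exp(-0.1 * t) * np.cos(t)

t_zero = first_crossing(t, s, threshold=0.0)  # scheme used for Jx >= 0
t_02 = first_crossing(t, s, threshold=0.2)    # scheme used for Jx < 0
```

Consistent with the discussion of Fig. \ref{Fig:Jsubzero}, the intersection with the higher threshold level occurs closer to zero than the first actual zero of the signal.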
Hence, strictly speaking, it is not per se the first zero of $\langle S^z_1(t) \rangle$ which is a measure for the decoherence time, but rather the scaling behavior of the dynamics on short time scales. Following the route described above, in Fig. \ref{Fig:scale} we plot the positions (measured in units of $\hbar/(A/2N)$) of the first zeroes of $\langle S^z_1(t) \rangle$ for $\Jx \geq 0$, and of the first intersections with the threshold level shown in Fig. \ref{Fig:Jsubzero} for $\Jx<0$, on a double logarithmic scale. We choose a weakly polarized bath $N=2N_D+2\Rightarrow p_b=2/N$, approaching the completely unpolarized case for $N\to\infty$. The absolute values of the positions for $\Jx \geq 0$ and $\Jx<0$ differ slightly from each other, which results from the fact that the intersection with the threshold level at $0.2$ happens closer to zero than with the usual threshold level $\langle S^z_1 \rangle=0$. Nevertheless, the scaling behavior is very similar in all cases, and each curve can nicely be fitted by a power law $\propto (N+2)^\nu$ with $\nu\approx -0.5$, a result similar to the one found for the homogeneous Gaudin system with only one central spin \cite{BorSt07}. In a GaAs quantum dot the electron spin typically interacts with approximately $N=10^6$ nuclei. Assuming the hyperfine coupling strength to be of the order of $A=10^{-5}\,$eV, as realistic for GaAs quantum dots \cite{SKhaLoss03}, this results in a time scale of $Nh/(\pi A)= 1.31 \cdot 10^{-4}\,$s. If we now use the above scaling behaviour $1/\sqrt{N+2}$, we get a decoherence time of $131\,$ns, which agrees quite well with the experimental data \cite{expAwschalom,Koppens05,Petta05,Koppens08}. This is an interesting result not only with respect to the validity of our model: as explained following equation (\ref{flipflop}), decoherence generally results ``directly'' from the electron-nuclear flip-flop terms and, through the superposition of product states, from the z terms.
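The decoherence-time estimate above is elementary arithmetic; the following sketch (with $h$ in eV\,s) reproduces the quoted numbers.

```python
import math

h = 4.1357e-15   # Planck constant in eV*s
A = 1e-5         # hyperfine coupling strength in eV
N = 10**6        # number of nuclear spins in a GaAs quantum dot

t_scale = N * h / (math.pi * A)      # basic time scale N h / (pi A)
t_dec = t_scale / math.sqrt(N + 2)   # apply the 1/sqrt(N+2) scaling

print(t_scale)   # ~1.32e-4 s
print(t_dec)     # ~1.32e-7 s, i.e. roughly 131 ns
```

This recovers the time scale of $1.31\cdot 10^{-4}\,$s and the decoherence time of about $131\,$ns quoted in the text.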
Above, we calculated the decoherence time for $\ket{\alpha_1} =\ket{\Uparrow \Downarrow}$, where the influence of the z terms is eliminated. The fact that we are able to reproduce the decoherence times suggests that the decoherence time caused by the flip-flop terms is equal to or smaller than the one resulting from the z parts of the Hamiltonian. It should be stressed that we calculate the decoherence time $T_2$ of an individual electron here. In Ref. \cite{Merkulov02} the decoherence time $T_2^*$ of an ensemble of dots has been calculated, yielding $1\,$ns for a GaAs quantum dot with $10^5$ nuclear spins. It is a well-known fact for the Gaudin system that the decaying part of the dynamics decreases with increasing polarization \cite{SKhaLoss03}. A numerical evaluation shows that this is also the case for two central spins. As explained in the context of Figs. \ref{Fig:evenodd1}, \ref{Fig:evenodd2}, \ref{Fig:evenoddJ1}, \ref{Fig:evenoddJ2}, the oscillations of our one-bath model become more and more coherent with increasing polarization. Together with the above results for the decoherence, this means that, although the homogeneous couplings are a strong simplification of the physical reality, our homogeneous coupling model shows rather realistic dynamical characteristics on the relevant time scales. This is plausible because artifacts of the homogeneous couplings, like the periodic revivals, set in on longer time scales. \section{Conclusion} In conclusion, we have studied in detail the hyperfine induced spin and entanglement dynamics of a model with homogeneous hyperfine coupling constants and varying exchange coupling, based on an exact analytical calculation. We found the dynamics to be periodic and symmetric for $\Jx$ being an integer multiple of $A/N$ or an odd multiple of $A/2N$, where the period depends on the number of bath spins. We explained this periodicity by analyzing the level spectrum.
For $\Jx=A/N$ we found an empirical rule which characterizes the dynamics for varying polarization. We have seen that for low polarizations the exchange coupling has no significant influence, whereas in the high-polarization case the dynamics mainly consists of one single frequency proportional to $\Jx$. It is not possible to entangle the central spins completely in the setup considered in this article. Following Ref.~\cite{BorSt07}, we extracted the decoherence time by analyzing the scaling behaviour of the first zero. In the case of negative exchange coupling the dynamics changes strongly on short time scales, and instead of the first zero we considered the intersection of the dynamics with a different threshold level parallel to the time axis. Both cases yield the same result, which is in good agreement with experimental data. Hence the scaling behaviour of the short-time dynamics can be regarded as a good indicator for the decoherence time. \newline \acknowledgments This work was supported by DFG program SFB631. J.~S. acknowledges the hospitality of the Kavli Institute for Theoretical Physics at the University of California at Santa Barbara, where this work was reaching completion and was therefore supported in part by the National Science Foundation under Grant No. PHY05-51164.
\section{Introduction} \label{intro} Super-asymptotic giant branch (SAGB) stars are characterized by the development of a degenerate carbon-oxygen (CO) core and the subsequent ignition of off-center carbon fusion within it. Stellar evolution calculations show that this occurs in stars that have zero-age main sequence masses $\approx 7-11\,\Msun$, with this mass range depending on the metallicity and on modeling assumptions such as the mass loss rate and the efficiency of mixing at convective boundaries. Carbon ignition initially occurs as an off-center flash, but after one or more of these flashes, a self-sustaining carbon-burning front can develop \citep[see e.g.,][]{Siess06, Farmer15}. This ``flame'' propagates towards the center of the star extremely sub-sonically, as heat from the burning front is conducted inward. The heat from the burning also drives a convective zone above the burning front, and in the quasi-steady-state, the energy released by carbon fusion is balanced by energy losses via neutrino cooling in this convective zone \citep{Timmes94}. As the carbon-burning flame propagates to the center, it leaves behind oxygen-neon (ONe) ashes. This process creates the core that will become a massive ONe WD or collapse to a neutron star, powering an electron-capture supernova \citep{Miyaji80}. However, the presence of additional mixing near the flame can lead to its disruption, preventing carbon burning from reaching the center. There are at least two physical processes that may play a role in this region: (1) mixing driven by the thermohaline-unstable configuration of the hot ONe ash on top of the cooler CO fuel and (2) mixing driven by the presence of a convective zone above the flame via convective overshoot. These processes were investigated by \citet{Denissenkov13b} using 1D stellar evolution models. 
With a thermohaline diffusion coefficient informed by multi-dimensional hydrodynamics simulations, they concluded that thermohaline mixing was not sufficient to disrupt the flame. However, they did find that the introduction of sufficient convective boundary mixing---using a model of exponential overshooting \citep{Freytag96, Herwig00}---disrupted the flame, preventing carbon burning from reaching the center. This led to the production of ``hybrid C/O/Ne'' WDs, in which a CO core is overlaid by an ONe mantle. Several groups have begun to model the explosions that would originate from objects with this configuration \citep{Denissenkov15, Kromer15, Bravo16, Willcox16}. Is mixing sufficiently vigorous to disrupt the carbon flame? This is a key question for understanding the final outcomes of SAGB stars and the WDs they produce. If the thermal diffusivity $\kappa$ is much larger than the chemical diffusivity $D$, the flame propagates into fresh fuel much more quickly than the fuel and ash can mix, allowing the flame to successfully propagate to the center of the star. We estimate $\kappa/D\sim 10^6$ using the thermal conductivity in MESA (which is drawn from \citealt{Cassisi07}) and a chemical diffusivity from \citet{Beznogov14}. However, convective mixing could produce a {\it turbulent} diffusivity $\Dturb$, which if similar to $\kappa$, could mix ash into the fuel, stalling the flame, as was found in \citet{Denissenkov13b}. In this paper, we present 3D simulations of an idealized model of a convectively-bounded carbon flame. These simulations allow us to measure the enhanced mixing due to convective overshoot, and to determine if $\Dturb>\kappa$ within the flame. Section~\ref{sec:carb-flame-prop} summarizes the properties of carbon flames, which we use to motivate the problem setup presented in Section~\ref{sec:prob-setup}. Section~\ref{sec:results} presents the results of our simulations and we discuss their implications in Section~\ref{sec:conclusions}. 
\section{Carbon Flame Properties} \label{sec:carb-flame-prop} To obtain an example of the structure of a carbon flame, we evolve a star with zero-age main sequence mass of 9.5 $\Msun$ using revision 6794 of the \textsc{MESA}\ stellar evolution code\footnote{\textsc{MESA}\ is available at \url{http://mesa.sourceforge.net/}.} \citep{Paxton11, Paxton13, Paxton15}. We used the publicly available inlists of \cite{Farmer15}, who undertook a systematic study of carbon flames in SAGB stars. We did not include the effects of overshoot at the convective boundaries, but did include the effects of thermohaline mixing. The Brunt-V\"{a}is\"{a}l\"{a} (buoyancy) frequency profile of the carbon flame is shown by the blue line in Fig.~\ref{fig:buoyancy frequency}. The thermal component dominates the buoyancy frequency. The much smaller compositional component is destabilizing, but \citet{Denissenkov13b} found thermohaline mixing to not affect flame propagation. The flame structure in Fig.~\ref{fig:buoyancy frequency} is similar to that shown in Figure~3 of \citet{Denissenkov13b}. The peak of the buoyancy frequency profile shown in Fig.~\ref{fig:buoyancy frequency} is at a Lagrangian mass coordinate of $M_r = 0.13\,\Msun$. The properties of the flame change as it propagates, but the following numbers are representative throughout the evolution. The inward flame velocity is $u = \unit[9\times10^{-4}]{cm\,s^{-1}}$; at this speed, the flame will take $\sim \unit[10^{4}]{yr}$ to reach the center. The flame width, $\delta$, measured in terms of the pressure scale height, $H = \unit[2\times10^8]{cm}$, is $\delta / H \approx 0.03$. The timescale for the flame to cross itself is $t_{\mathrm{cross}} = \delta / u \approx \unit[200]{yr}$, which is also the timescale for the nuclear burning to occur. The convection zone above the flame has a radial extent of about one pressure scale height and a convective turnover timescale of a few hours.
This implies that there are $\sim 10^5$ convective turnover times in the time it takes the flame to cross itself. Thus, over the relatively smaller number of convective turnover times covered by our simulations, $\sim 10^2$, the flame is effectively stationary, allowing us to exclude nuclear reactions in our model. We note that our stationarity assumption is not universally applicable. Convectively bounded oxygen-neon-burning flames, which can also occur in the late evolution of stars in this mass range, are thinner, $\delta \sim \unit[10^3]{cm}$, and have higher velocities, $u \sim \unit[1]{cm\,s^{-1}}$, as a result of the higher energy generation rate \citep{Timmes94, Woosley15}. Consequently, the time for the flame to traverse its width may be $\lesssim 10$ convective turnover times. Thus it is difficult to anticipate how our simulations carry over to the case of oxygen-neon flames. The Mach number of the convection is $\approx 4 \times 10^{-5}$, so compressibility does not play an important role in the convection. To measure the degree of turbulence of the convection, we calculate the Rayleigh number \begin{align}\label{eqn:rayleigh} {\rm Ra} = \frac{\omega_0^2 H^4}{\nu\kappa}, \end{align} which is the ratio of convective driving to diffusive damping. The variables $\omega_0$ and $H$ represent a typical convective frequency and length scale, and $\nu$ and $\kappa$ are the kinematic viscosity and thermal diffusivity. We estimate the convection driven by a carbon flame to have ${\rm Ra} \sim 10^{24}$, using $\omega_0 \sim \unit[3\times10^{-4}]{s^{-1}}$, $H \sim \unit[2\times10^{8}]{cm}$, $\nu \sim \unit[5\times10^{-2}]{cm^2\,s^{-1}}$ \citep{Itoh83} and $\kappa \sim \unit[3 \times 10^3]{cm^2\,s^{-1}}$ \citep{Itoh87}. This large Rayleigh number means the flow is extremely turbulent. Flames maintain coherence because their thermal diffusivity is much larger than their chemical diffusivity.
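The order-of-magnitude estimates in this section are easy to reproduce. The sketch below (taking ``a few hours'' to be four, an assumption on our part) recovers the flame crossing time, the number of convective turnover times per crossing time, and the Rayleigh number quoted above.

```python
# Back-of-envelope check of the flame and convection numbers quoted above.
u = 9e-4          # inward flame velocity [cm/s]
H = 2e8           # pressure scale height [cm]
delta = 0.03 * H  # flame width [cm]
yr = 3.15e7       # seconds per year

t_cross = delta / u / yr          # flame self-crossing time [yr], ~200 yr

t_turn = 4 * 3600.0               # convective turnover time ("a few hours") [s]
n_turn = (t_cross * yr) / t_turn  # turnover times per crossing time, ~1e5

omega0 = 3e-4   # convective frequency [1/s]
nu = 5e-2       # kinematic viscosity [cm^2/s]
kappa = 3e3     # thermal diffusivity [cm^2/s]
Ra = omega0**2 * H**4 / (nu * kappa)  # Rayleigh number, ~1e24
```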
The ratio of these diffusivities is the Lewis number \begin{align}\label{eqn:lewis} {\rm Le} = \frac{\kappa}{D}. \end{align} For carbon flames, we estimate ${\rm Le} \sim 10^6$. \begin{figure} \includegraphics[width=\columnwidth]{frequencies.eps} \caption{The blue line shows the buoyancy frequency squared near a carbon flame from a 9.5 $\Msun$ star evolved in MESA. The red line is the buoyancy frequency squared from the Dedalus simulation R8 (very close to its initial profile, see equation~\ref{eqn:N_0^2 dedalus}). Due to computational limitations, the peak buoyancy frequency is much lower, and the transition between the buoyancy peak and the convective region much more gradual, in the Dedalus simulation than in the MESA model. These differences both act to enhance the convective mixing via overshoot in Dedalus. The inset shows the neutral buoyancy height $z_{\rm nb}$ and the bottom of the convection zone $z_0$ in the Dedalus simulation. In the MESA model, this region is not resolved, with a width $z_0 - z_{\rm nb} < 3 \times 10^{-3}H$.}\label{fig:buoyancy frequency} \end{figure} \section{Problem Setup} \label{sec:prob-setup} Our idealized simulations make a variety of assumptions to render this problem computationally tractable. We do not include nuclear reactions because the flame is effectively stationary on the convection time scale. We use the Boussinesq approximation because the Mach number of the convection is small, and the height of the convection zone is about a scale height, so we do not believe density contrasts across the convection zone will strongly alter the dynamics. \subsection{Equations, Numerics, \& Assumptions} We solve the 3D Boussinesq equations \citep{SpiegelVeronis60} using the Dedalus\footnote{Dedalus is available at \url{http://dedalus-project.org}.} pseudo-spectral code \citep{burns17}.
\begin{align} \partial_t\vec{u} + \vec{\nabla} p - \nu\nabla^2\vec{u} - g T\vec{e}_z & = -\vec{u}\cdot\vec{\nabla}\vec{u}, \\ \partial_t T - \kappa \nabla^2 T & = -\vec{u}\cdot\vec{\nabla} T + \bar{H}, \\ \vec{\nabla}\cdot\vec{u} & = 0, \end{align} where $\vec{u}$ and $p$ are the fluid velocity and pressure, respectively, $T$ is the temperature normalized to a reference value, $g$ is the gravitational acceleration, and $\vec{e}_z$ is the unit vector in the vertical direction. We neglect the compositional effects on buoyancy (and thus thermohaline mixing), and always use $\nu=\kappa$ for computational convenience. Convective overshoot is particularly sensitive to the buoyancy frequency profile \citep[e.g.,][]{brummell02}. Thus, we study convective overshoot using a buoyancy frequency profile inspired by a carbon flame. This assumes that the most important property affecting turbulent mixing of a carbon flame is its strong buoyancy stabilization. The simulations are initialized with a temperature profile $T_0(z)$ satisfying $N_0^2(z)=g {\rm d} T_0/{\rm d}z$, where \begin{align}\label{eqn:N_0^2 dedalus} N_0^2 = -\omega_0^2 +& N^2_{\rm tail} \frac{1}{2}\left[1-\tanh\left(\frac{z-z_{\rm fl}}{\Delta z_{\rm fl}}\right)\right] \nonumber \\ +& N^2_{\rm fl} \cosh\left(\frac{z-z_{\rm fl}}{\Delta z_{\rm fl}}\right)^{-2}, \end{align} where $\omega_0^2$ is a characteristic convective frequency, and we take $N^2_{\rm tail}=100\omega_0^2$, $N^2_{\rm fl}=10^4\omega_0^2$ as approximations to the MESA model. The position of the buoyancy peak (``flame'') is $z_{\rm fl}=0.9 H$ and its half-width is $\Delta z_{\rm fl}=0.05 H$, where $H$ represents a pressure scale height. We plot the time-averaged buoyancy frequency profile of simulation R8 in Fig.~\ref{fig:buoyancy frequency} with a red line. All simulations have very similar buoyancy frequency profiles, which differ from $N_0^2$ only very close to the bottom of the convection zone.
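As a consistency check on this profile (a sketch only, working in units $H=\omega_0=1$ with the stated parameters), one can evaluate equation~\ref{eqn:N_0^2 dedalus} and locate the height above the buoyancy peak where $N_0^2$ changes sign. A simple bisection gives $z_0\approx 1.16H$ for the initial profile, in the neighbourhood of the time-averaged value $z_0=1.180$ reported for simulation R8 in Table~\ref{tab:sims}, consistent with the profile changing only slightly near the bottom of the convection zone.

```python
import math

# Parameters of the model buoyancy profile (units: H = omega_0 = 1).
N2_tail, N2_fl = 100.0, 1e4
z_fl, dz_fl = 0.9, 0.05

def N0_squared(z):
    x = (z - z_fl) / dz_fl
    return (-1.0
            + N2_tail * 0.5 * (1.0 - math.tanh(x))
            + N2_fl / math.cosh(x) ** 2)

# Bisection for the zero of N0^2 above the buoyancy peak,
# i.e. the bottom of the convection zone z_0.
lo, hi = z_fl, 2.0  # N0^2 > 0 at the peak, < 0 at the top of the domain
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if N0_squared(mid) > 0.0:
        lo = mid
    else:
        hi = mid
z0 = 0.5 * (lo + hi)
```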
We also include a heating term $\bar{H}=-\kappa \partial_z^2 T_0$ which exactly balances the diffusion of $T_0$. This maintains the buoyancy profile and convection over the course of our simulations, enforcing the stationarity assumption. It is important to note that a flame with the width and thermal diffusivity used in our simulations would propagate across itself in only $10^{1-2}$ convective turnover times. This is because the thermal diffusion in the simulations is much more rapid than in a star. As a result, the stationary buoyancy peak in our simulations does not self-consistently represent a real carbon flame, whose properties would depend on the thermal diffusivity. However, in the limit in which the thermal diffusivity in the simulation approaches the thermal diffusivities realized in stars, the simulations would provide a good approximation to convective overshoot in real carbon flames. Therefore, we hold the buoyancy profile of the model ``flame'' fixed as we carry out simulations with different microphysical diffusivities. We show below that despite the need to extrapolate the simulation results, we can nonetheless draw firm conclusions about convective mixing in carbon flames. The simulations are non-dimensionalized using the pressure scale height $H$ and the initial buoyancy frequency in the convection zone, $|N_0(z=2H)|=\omega_0$. These are used to define a Rayleigh number (Eqn.~\ref{eqn:rayleigh}). The limited resolution of any multi-dimensional astrophysics simulation requires diffusivities much larger than in stars, so we can only reach ${\rm Ra}=10^9\ll 10^{24}$. Our highest resolution simulation required about 3 million CPU-hours on the Pleiades supercomputer. We define the bottom of the convection zone, where $N^2=0$, to be $z_0$.
We also define the height of neutral buoyancy $z_{\rm nb}$, the point at which $\langle T(z_{\rm nb})\rangle_{x,y,t}=\langle T(z_{\rm top})\rangle_{x,y,t}$, where $\langle \cdot \rangle_{x}$ denotes an average over $x$ (and analogously for the other subscripts), and $z_{\rm top}$ is the top of the domain (see inset in Fig.~\ref{fig:buoyancy frequency}). Plumes emitted at the top of the convection zone become neutrally buoyant at $z_{\rm nb}$. Convective plumes cross $z_0$, but rarely pass below $z_{\rm nb}$. The convection frequency $\omega_{\rm conv}$ and the height of the convection zone $H_{\rm conv}$ are outputs of the simulation. We define $H_{\rm conv}$ using $z_0$ and \begin{align} \omega_{\rm conv} = 2\pi\frac{w_{\rm rms}}{H_{\rm conv}}, \end{align} where $w_{\rm rms}$ is the root-mean-square vertical velocity in the convection zone. We find $H_{\rm conv}\approx 0.83 H$ and $\omega_{\rm conv}\sim 0.3 \omega_0$. Simulations with higher ${\rm Ra}$ have smaller $\omega_{\rm conv}$. This is driven by the thermal equilibration of the system. In statistically steady state, the convection zone is almost isothermal, so the temperature perturbation at the bottom of the convection zone is about $-H_{\rm conv}\omega_0^2/g$. To satisfy our bottom boundary condition, the stable region has a temperature gradient of about $-H_{\rm conv}\omega_0^2/(g H_{\rm stable})$, where $H_{\rm stable}=2H-H_{\rm conv}$. Because the temperature gradient in the stable region is independent of $\kappa$, the heat flux scales like $\kappa\sim{\rm Ra}^{-1/2}$. To maintain flux balance, this heat flux must be carried by the convective flux in the convection zone, which scales like $w_{\rm rms}^{3}$. Thus, we have that $w_{\rm rms}\sim \omega_{\rm conv}\sim {\rm Ra}^{-1/6}$. Plumes become neutrally buoyant at $z_{\rm nb}$, but will penetrate further due to their inertia.
To measure this effect, we define an ``overshoot number'' ${\rm Ov}$, which is the ratio of inertial to buoyancy forces near $z_{\rm nb}$, \begin{align}\label{eqn:Ov} {\rm Ov} \equiv \frac{\omega_{\rm conv}^2}{N_{\rm fl}^2}\frac{\Delta z_{\rm fl}}{H}, \end{align} where we estimate the inertia of the fluid as $\sim \omega_{\rm conv}^2H$, and the buoyancy as $H^2 N_{\rm fl}^2/\Delta z_{\rm fl}$. The latter assumes the derivative of the buoyancy frequency squared near $z_{\rm nb}$ is proportional to $N_{\rm fl}^2/\Delta z_{\rm fl}$. We report ${\rm Ov}$ for our simulations in Table~\ref{tab:sims}. For comparison, we estimate real flames have ${\rm Ov}\sim 10^{-10}$, using $N_{\rm fl}^2\sim 2\times 10^8\,\omega_0^2$ and $\Delta z_{\rm fl}=0.03H$. However, the buoyancy frequency profile is actually much steeper than this linear estimate, so the real ${\rm Ov}$ is likely even smaller (see Fig.~\ref{fig:buoyancy frequency}). Our chosen buoyancy profile differs from the MESA model in two important ways: (1)~the peak is at lower frequencies; and (2)~the buoyancy frequency approaches zero more gradually. This is necessary because it is difficult to numerically resolve the fast buoyancy timescale and sharp buoyancy gradients. Both these changes lead to substantially higher ${\rm Ov}$ than we expect in real flames. Thus, we expect our simulated plumes to penetrate much further than the convective plumes driven by carbon flames. Table~\ref{tab:sims} also reports the Reynolds number, a measure of the degree of turbulence in the fluid, defined as \begin{align}\label{eqn:reynolds} {\rm Re} = \frac{w_{\rm rms}H_{\rm conv}}{\nu}. \end{align} We solve the equations in Cartesian geometry ($x,y,z$), in the domain $[0,4H]^2\times [0,2H]$. The simulations are periodic in the horizontal directions, and no-slip with zero temperature perturbation at the top and bottom. All quantities are expanded in a Fourier series in the horizontal directions.
In the vertical direction, quantities are independently expanded in Chebyshev polynomials over the domain $[0, 1.05H]$, and over the domain $[1.05H, 2H]$, with boundary conditions imposed at $z=1.05H$ to maintain continuity of each quantity and its first vertical derivative. An equal number of Chebyshev modes are used in each vertical sub-domain. 3/2 dealiasing is used in each direction. We use mixed implicit-explicit timestepping, where all the linear terms are treated implicitly, and the remaining terms treated explicitly. The timestep size is determined using the Courant--Friedrichs--Lewy (CFL) condition. Table~\ref{tab:sims} describes the simulations presented in this paper. \begin{table*} \caption{List of simulations. The Rayleigh and Lewis number characterize the diffusion in the simulations (see eqns.~\ref{eqn:rayleigh} \& \ref{eqn:lewis}). The resolution is the number of Fourier or Chebyshev modes used in each direction. The CFL safety factor is listed along with our choice of timestepper. The overshoot number ${\rm Ov}$ measures the ratio of inertial to buoyancy forces in the overshoot region (see eqn.~\ref{eqn:Ov}). The Reynolds number describes the degree of turbulence in the simulation (see eqn.~\ref{eqn:reynolds}). The three columns after the Reynolds number are the heights at which $\Dturb=\alpha\kappa$, where $\alpha=1$, $0.3$, or $0$. For comparison, in simulation R8, the bottom of the convection zone is $z_0=1.180$ and the height of neutral buoyancy is $z_{\rm nb}=1.116$. 
The last column is the overshoot length (normalized to the pressure scale height $H$), defined as the distance between the bottom of the convection zone and the location where $\Dturb=0$.} \label{tab:sims} {\centering \begin{tabular}{ccccccccccc} \hline Name & ${\rm Ra}$ & ${\rm Le}$ & Resolution & Timestepper/CFL & ${\rm Ov}$ & ${\rm Re}$ & $\Dturb=\kappa$ & $\Dturb = 0.3\kappa$ & $\Dturb=0$ & $L_{\rm ov}$\\ \hline \hline R7 & $10^7$ & $1$ & $256^3$ & RK222$^a$/1.0 & $4\times 10^{-4}$ & 150 & 1.123 & 1.097 & 1.066 & 0.111 \\ R8 & $10^8$ & $1$ & $256^3$ & RK222/1.0 & $2\times 10^{-4}$ & 329 & 1.122 & 1.102 & 1.080 & 0.101 \\ R9 & $10^9$ & $1$ & $512^3$ & SBDF2$^b$/0.4 & $1\times 10^{-4}$ & 751 & 1.122 & 1.107 & 1.091 & 0.090 \\ R7L3 & $10^7$ & $10^{1/2}$ & $256^3$ & RK222/1.0 & $4\times 10^{-4}$ & 150 & 1.133 & 1.102 & 1.061 & 0.116 \\ R8L3 & $10^8$ & $10^{1/2}$ & $256^{3c}$ & RK222/1.0 & $2\times 10^{-4}$ & 329 & 1.133 & 1.109 & 1.083 & 0.098 \\ R7L10 & $10^7$ & $10$ & $256^{3c}$ & RK222/1.0 & $4\times 10^{-4}$ & 150 & 1.145 & 1.111 & 1.063 & 0.114 \\ \hline \multicolumn{6}{l}{\textsuperscript{a}\footnotesize{Second order, two-stage Runge-Kutta method \citep{ascher97}.}} \\ \multicolumn{6}{l}{\textsuperscript{b}\footnotesize{Second order semi-backward differencing \citep{wang08}.}} \\ \multicolumn{6}{l}{\textsuperscript{c}\footnotesize{The passive scalar field is evolved at $512^3$.}} \end{tabular} } \end{table*} \subsection{Passive Tracer Field} The goal of this work is to estimate turbulent diffusivities associated with convective overshoot. To do this, we solve for the evolution of a passive tracer field $c$ \begin{align} \partial_t c - D \nabla^2 c = - \vec{u}\cdot\vec{\nabla} c. \end{align} The tracer $c$ heuristically represents the fuel concentration, and $D$ is a proxy for chemical diffusivity (and is required for numerical stability).
The tracer $c$ satisfies zero flux boundary conditions on the top and bottom of the domain, so its volume integral is conserved. It is initialized with \begin{align} c = \frac{1}{2}\left[1 - \tanh\left(\frac{z-0.8H}{\Delta z_{\rm fl}}\right)\right], \end{align} which corresponds to $c=0$ in the convection zone and $c=1$ below the buoyancy peak in the stable region. \section{Results} \label{sec:results} \begin{figure} \includegraphics[width=\columnwidth]{convection.eps} \caption{Two dimensional vertical slices of the temperature perturbation field (top) and the normalized passive scalar field (bottom) in simulation R9. The color scale for $\tilde{c}$ consists of two linear maps, stitched together at $\tilde{c}\approx -0.5$ to show the small variations within the convection zone. The dashed line shows the bottom of the convection zone, $z_0$, and the solid line shows $z_{\rm nb}$ the neutral buoyancy height. The perturbations below $z_{\rm nb}$ are waves and yield negligible mixing.}\label{fig:convection} \end{figure} After several convective turnover times, the system reaches a statistically steady state. We visualize the convection in Fig.~\ref{fig:convection}, plotting 2D vertical slices of the temperature perturbation field and the normalized passive scalar field. The temperature perturbation is $T' = T - \langle T \rangle_{x,y,t}$. We normalize the passive scalar field by subtracting off the volume-average, and setting its value to 1 at the bottom boundary: \begin{equation} \tilde{c} = \left(c - \langle c\rangle_{x,y,z}\right)/\left(\langle c(z=0)\rangle_{x,y}-\langle c\rangle_{x,y,z}\right)~. \end{equation} Fig.~\ref{fig:convection} includes dashed lines at the bottom of the convection zone, $z_0$, and solid lines at the height of neutral buoyancy $z_{\rm nb}$. There is substantial convective overshoot between $z_0$ and $z_{\rm nb}$. Below $z_{\rm nb}$, the buoyancy perturbations show the long, coherent structures of internal gravity waves. 
These waves yield negligible mixing. \subsection{Self-Similar Solution} \begin{figure} \includegraphics[width=\columnwidth]{c_comparison.eps} \caption{Horizontal average of the passive scalar field at four times in simulation R8. $\bar{c}$ is also time-averaged around each time for $30\omega_{\rm conv}^{-1}$. The passive scalar field is attracted to the self-similar solution, $C$ (right panel and equation~\ref{eqn:self similar}). The left panel also shows the solution of the effective diffusion model (equation~\ref{eqn:diffusion model}). The 1D effective diffusion model matches the 3D simulation.}\label{fig:c-bar} \end{figure} We now study the evolution of the horizontal average of the passive scalar field, $\bar{c}\equiv \langle c\rangle_{x,y}$. After several convective turnover times, $\bar{c}$ approaches a self-similar solution. The left panel of Fig.~\ref{fig:c-bar} shows the evolution of $\bar{c}$ in simulation R8, where $t_0$ is several turnover times after the beginning of the simulation. The profiles collapse to a single curve after subtracting off the volume-average and normalizing the bottom value to unity (i.e., taking the horizontal average of $\tilde{c}$ shown in Fig.~\ref{fig:convection}). This indicates that \begin{align}\label{eqn:self similar} \bar{c}(z,t)-\langle \bar{c}\rangle_{z}\rightarrow A(t)C(z), \end{align} where $A(t)$ is an amplitude, and $C(z)$ the vertical profile in the right panel of Fig.~\ref{fig:c-bar}. Furthermore, we find that $A(t)=A_0\exp(-\lambda t)$. $C$ thus satisfies the equation \begin{align} -\lambda C - D \partial_z^2 C = - \left\langle \vec{u}\cdot\vec{\nabla} \frac{c}{A} \right\rangle_{x,y,t}. \end{align} We now assume that the term on the right-hand side can be written as a turbulent diffusion term. This is the Fickian diffusion ansatz \citep[e.g.,][]{brandenburg09}.
The equation can be rewritten as \begin{align}\label{eqn:effective diffusivity} -\lambda C = \partial_z \left[ (D + \Dturb) \partial_z C\right], \end{align} where $\Dturb(z)$ is a turbulent diffusivity profile. We can invert equation~\eqref{eqn:effective diffusivity} to solve for $\Dturb$ in terms of $\lambda$ and $C$ by integrating the equation with respect to $z$ and then dividing by $\partial_z C$. We find that $\Dturb\ll D$ in the stable region, and is large $\sim w_{\rm rms}H_{\rm conv}$ in the convection zone; the value of $\Dturb$ is not well-constrained in the convection zone, as $\partial_zC$ is very close to zero. We find that the {\it effective} diffusivity, $D+\Dturb$ is well-fit by two error functions, one which varies from zero in the convection zone to $D$ in the stable region, the other which varies from zero in the stable region to $w_{\rm rms}H_{\rm conv}$ in the convection zone. In the rest of this paper, we replace $\Dturb$ by a least-squares fit composed of these error functions. Fig.~\ref{fig:diffusions} (left panel) includes both $|\Dturb|$ (dotted black line) and the least-squares fit (yellow line) for simulation R8. \subsection{Turbulent Diffusivity Model} \begin{figure} \includegraphics[width=\columnwidth]{diffusion_model.eps} \caption{Horizontal average of a passive scalar field using the convection in simulation R8. $c$ is initialized at $t_1$ to be horizontally uniform, with the vertical profile shown here. The diffusion model equation~\eqref{eqn:diffusion model} was initialized with the same profile. The 1D effective diffusion model matches the 3D simulation over the entire simulation.}\label{fig:diffusion model} \end{figure} To show that the convection acts like a turbulent diffusivity, we solve the model equation \begin{align}\label{eqn:diffusion model} \partial_t S(z,t) = \partial_z \left[(D+\Dturb)\partial_z S(z,t)\right]. 
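The inversion described above can be written out explicitly: integrating equation~\eqref{eqn:effective diffusivity} up from the no-flux bottom boundary gives $(D+\Dturb)\partial_z C = -\lambda\int_0^z C\,dz'$, and dividing by $\partial_z C$ isolates $\Dturb$. Below is a minimal numerical sketch, verified on an analytic test case with no turbulence (all values are illustrative, not simulation data):

```python
import numpy as np

def invert_for_turbulent_diffusivity(z, C, lam, D):
    """Recover Dturb(z) from the self-similar profile C(z) and decay rate lam.

    Integrating -lam C = d/dz[(D + Dturb) dC/dz] up from the no-flux bottom
    boundary gives (D + Dturb) dC/dz = -lam * int_0^z C dz', hence
    Dturb = -lam * cumint(C) / (dC/dz) - D.
    """
    dCdz = np.gradient(C, z)
    # cumulative trapezoidal integral of C from the bottom boundary
    cumint = np.concatenate(
        ([0.0], np.cumsum(0.5 * (C[1:] + C[:-1]) * np.diff(z))))
    with np.errstate(divide="ignore", invalid="ignore"):
        return -lam * cumint / dCdz - D

# analytic check with no turbulence: for constant D and no-flux boundaries,
# C = cos(pi z / H) is an eigenfunction with lam = D pi^2 / H^2, so the
# inversion should return Dturb ~ 0 away from the boundaries (where
# dC/dz -> 0 and the division is ill-conditioned)
H, D = 1.0, 1e-3
z = np.linspace(0.0, H, 2001)
C = np.cos(np.pi * z / H)
lam = D * np.pi**2 / H**2
Dturb = invert_for_turbulent_diffusivity(z, C, lam, D)
```

As in the text, the inversion is ill-conditioned wherever $\partial_z C\approx 0$, which is why $\Dturb$ is poorly constrained inside the convection zone.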
\end{align} If we initialize $S(z,t)$ with $\langle c(t=t_0)\rangle_{x,y}$ and use our fit for $\Dturb(z)$, we find that $S\approx A(t)C(z)$, as shown in Fig.~\ref{fig:c-bar}, for every simulation. As a further test of the diffusion model, we re-initialized simulation R8 with a new concentration field profile halfway through the simulation at time $t_1$. We solved equation~\eqref{eqn:diffusion model} with $S(z,t_1)=\bar{c}(t=t_1)$. Fig.~\ref{fig:diffusion model} shows that $S\approx \bar{c}$ for the remainder of the simulation. \subsection{Diffusion Profiles}\label{sec:diffusion profiles} \begin{figure} \includegraphics[width=\columnwidth]{diffusion_comparison_all.eps} \caption{Turbulent diffusivity (equation~\ref{eqn:effective diffusivity}) as a function of height in each of our simulations, both in units of the characteristic convective diffusivity (left panel), and in units of the thermal diffusivity (right panel). We plot a fit to $\Dturb$ for all simulations, and also plot $|\Dturb|$ itself in the thin dotted line for simulation R8. The dashed line shows the bottom of the convection zone, $z_0$, and the solid line shows $z_{\rm nb}$, the neutral buoyancy height. In the left panel, the height at which $\Dturb=0.3\kappa$ is marked by an asterisk---mixing can only affect flame propagation above this point. The hatched region shows the region that must be mixed in order to disrupt the flame (section~\ref{sec:flame-disruption}). Increasing ${\rm Ra}$ and/or ${\rm Le}$ causes $\Dturb$ to approach zero further away from the buoyancy peak, meaning that mixing is less significant for more realistic parameters. }\label{fig:diffusions} \end{figure} We plot the turbulent diffusion profiles $\Dturb(z)$ for each of our simulations in Fig.~\ref{fig:diffusions}, both in units of the characteristic convective diffusivity (left panel), and in units of the thermal diffusivity (right panel). 
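The 1D model above can be integrated with a simple conservative scheme, in which zero-flux boundaries conserve $\int S\,dz$ exactly, mirroring the conserved tracer in the 3D simulations. The following is a minimal sketch with an illustrative two-layer effective diffusivity (placeholder values, not the fitted profile from the simulations):

```python
import numpy as np

def step_diffusion(S, z, Deff, dt):
    """Advance dS/dt = d/dz[Deff dS/dz] by one explicit, conservative
    finite-volume step with zero-flux boundary conditions."""
    dz = z[1] - z[0]
    D_face = 0.5 * (Deff[1:] + Deff[:-1])   # diffusivity at cell faces
    flux = np.zeros(len(S) + 1)             # flux[0] = flux[-1] = 0 (no flux)
    flux[1:-1] = -D_face * np.diff(S) / dz
    return S - dt * np.diff(flux) / dz

# illustrative setup: tanh concentration profile and a two-layer effective
# diffusivity, large in the "convection zone" (z > 0.8), small below
z = np.linspace(0.0, 1.0, 400)
dz = z[1] - z[0]
S = 0.5 * (1.0 - np.tanh((z - 0.8) / 0.05))
Deff = 1e-3 + 0.1 * 0.5 * (1.0 + np.tanh((z - 0.8) / 0.05))
dt = 0.2 * dz**2 / Deff.max()               # explicit stability limit

total0 = S.sum() * dz                       # conserved total
var0 = S.var()                              # diffusion must reduce variance
for _ in range(2000):
    S = step_diffusion(S, z, Deff, dt)
```

The flux form guarantees conservation to floating-point accuracy, which is the property that makes the 1D model a fair comparison to the tracer-conserving 3D runs.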
In the convection zone, the diffusivity is about equal to the convective diffusivity, partially dictated by our choice of fit. The turbulent diffusivity drops from its convective value {\it within} the convection zone. This cannot be attributed to the change in the horizontal average of $w^2$ near $z_0$ \citep[similar to][]{Jones16}. Deep within the stably stratified region, the turbulent diffusivity is nearly zero. We are interested in how $\Dturb$ transitions from large values in the convection zone to small values in the stable region. In this respect, the behavior of $\Dturb/\kappa$ is very similar in all simulations (Fig.~\ref{fig:diffusions}, right panel). Heuristically, we expect mixing to play a role in the propagation of flames when $\Dturb\sim\kappa$. We find that the height at which $\Dturb=\kappa$ is almost independent of ${\rm Ra}$, but increases with ${\rm Le}$ (Table~\ref{tab:sims}), i.e., moves closer to the convective boundary and further from the ``flame.'' In section~\ref{sec:flame-disruption}, we find that a more precise criterion for flame disruption is $\Dturb\gtrsim0.3 \kappa$ in the region in which $N \gtrsim 0.1 N_{\rm fl}$. The height at which $\Dturb=0.3\kappa$ increases with both ${\rm Ra}$ (Fig.~\ref{fig:diffusions}, left panel), and ${\rm Le}$ (Table~\ref{tab:sims}), suggesting that flame disruption becomes less likely for more realistic values of ${\rm Ra}$ and ${\rm Le}$. Two common parameterizations of convective overshoot are exponential overshoot, in which the turbulent diffusivity drops exponentially with distance from the end of the convection zone \citep[e.g.,][]{Herwig00}, and an overshoot length, in which the convective diffusivity is set to zero at a length $L_{\rm ov}$ beyond the convection zone \citep[e.g.,][]{Shaviv73, Maeder75}. In all our simulations, $\Dturb$ is {\it negative} below a critical height (although the {\it effective} diffusivity $D+\Dturb$ is everywhere positive).
This suggests that a good parameterization of our simulations would be an overshoot length, rather than exponential overshoot. We define the overshoot length $L_{\rm ov}$ to be the distance between the bottom of the convection zone (where $N^2=0$) and the location where $\Dturb=0$, and report it in Table~\ref{tab:sims}. All lengths in the paper, including $L_{\rm ov}$, are normalized to the pressure scale height $H$. Below the point at which $\Dturb=0$, the absolute value of $\Dturb$ is very small.\footnote{We cannot place strong constraints on $|\Dturb|$ when its value is very small, as its value can be influenced by some combination of: 1.~Timestepping errors due to using a low ($2^{\rm nd}$) order timestepper; or 2.~Errors in the calculation of $C(z)$ or $\lambda$ due to insufficient averaging.} The weak dependence of $\Dturb$ in the overshoot region on the diffusivities of the system suggests that the height at which $\Dturb=0.3\kappa$ and the overshoot length $L_{\rm ov}$ are determined primarily by the length scale on which the buoyancy frequency profile changes from zero to order $\omega_c$, rather than a diffusive length scale. This suggests that the key length scale in the problem is $\sim z_0-z_{\rm nb}$ (see Fig.~\ref{fig:buoyancy frequency}). Indeed, the overshoot length $L_{\rm ov}\sim z_0-z_{\rm nb}$ in all of our simulations (Table~\ref{tab:sims}). This is because dense plumes falling through the convection zone become much lighter than their surroundings below $z_{\rm nb}$, so they cannot penetrate much further to produce mixing within the flame. We expect the overshoot length to scale as \begin{align} L_{\rm ov} - (z_0-z_{\rm nb}) \sim {\rm Ov}^{1/3}. \end{align} Our simulations do not explore a sufficiently wide range of ${\rm Ov}$ to test this scaling.
Although increasing ${\rm Ra}$ or ${\rm Le}$ further will introduce smaller eddies into simulations, we do not believe these smaller eddies will enhance mixing, as they are subject to the same buoyancy barrier as the larger plumes resolved in the simulations presented here. \subsection{Flame Disruption in MESA} \label{sec:flame-disruption} We explore the secular effects of mixing on flame propagation via a series of numerical experiments using MESA. We begin with the evolution of a $\unit[9.5]{\Msun}$ star (the same calculation discussed in Section~\ref{sec:carb-flame-prop}). We save a model when the carbon flame is at a Lagrangian mass coordinate of $\unit[0.2]{\Msun}$. We load this model in revision 8118 of MESA and use the built-in \texttt{other\_D\_mix} routine to introduce an artificial chemical diffusivity in the vicinity of the flame. We then observe whether this additional mixing affects the behavior of the flame. In the absence of additional mixing, the carbon-burning luminosity in the flame is smooth (in time) and roughly constant, with some secular variation as the flame propagates inward. We evolve the MESA models for $\approx \unit[2000]{yr}$, which is $\approx 10$ self-crossing times for the flame; in this time, the unperturbed flame propagates inwards through $\approx \unit[0.1]{\Msun}$ of material. We classify the flame as ``disrupted'' if the carbon-burning luminosity decreases significantly (by more than a factor of $\approx$ 10) or exhibits oscillatory behavior (by more than $\approx$ 10 \%). First, we set the chemical diffusivity ($D_t$) roughly equal to the convective diffusivity, $\unit[10^{12}]{cm^2\,s^{-1}}$ (which is $\sim H^2\omega$), in the region of the flame where $N<N_{\rm crit}$. This allows us to determine the region where significant mixing is required to disrupt the flame. 
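The disruption criterion used above---a sustained drop in the carbon-burning luminosity by a factor of $\approx 10$, or oscillations at the $\approx 10\%$ level---can be encoded as a simple check on a luminosity time series. The function and the synthetic series below are illustrative sketches, not MESA output:

```python
import numpy as np

def flame_disrupted(L, drop_factor=10.0, osc_fraction=0.10):
    """Flag a luminosity time series as disrupted if it drops by more than
    drop_factor from its initial value, or varies by more than osc_fraction
    of its mean (thresholds mirror the criteria described in the text)."""
    L = np.asarray(L, dtype=float)
    dropped = L.min() < L[0] / drop_factor
    oscillating = (L.max() - L.min()) / L.mean() > osc_fraction
    return bool(dropped or oscillating)

# illustrative synthetic series (not MESA output), luminosity in units
# of its initial value
t = np.linspace(0.0, 1.0, 500)
steady = 1.0 + 0.01 * np.sin(40.0 * t)    # smooth, roughly constant: intact
quenched = np.exp(-5.0 * t)               # luminosity collapses: disrupted
```

A real classification would also need to discount the slow secular variation as the flame propagates inward; the sketch above only captures the two thresholds quoted in the text.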
Increasing $N_{\rm crit}$ increases the amount of material in which additional mixing occurs, similar to increasing the overshoot length scale.\footnote{However, unlike overshooting, the mixing that we introduce is not spatially tied to the convective boundary.} We find the flame is only disrupted if $N_{\rm crit} \gtrsim 0.3 N_{\rm fl}$, where $N_{\rm fl}$ is the peak of the buoyancy frequency. This reflects the fact that it is necessary to mix material in the region where the bulk of the nuclear energy release is occurring in order to disrupt the flame. Second, we set the chemical diffusivity to be a constant factor times the thermal diffusivity over a region where $N<N_{\rm crit}$. This allows us to determine the ratio $\Dturb/\kappa$ needed to disrupt a flame. In terms of the opacity $\kappa_\star$, the thermal diffusivity is given by \begin{equation} \kappa = \frac{4 a c T^3}{\kappa_{\star} \rho^2 c_{\mathrm{P}}}, \end{equation} where $a$ is the radiation constant, $c$ the speed of light, $T$ the temperature, $\rho$ the density, and $c_{\mathrm{P}}$ the specific heat at constant pressure. For a value of $N_{\rm crit} = 0.3 N_{\rm fl}$, we find that the flame is only disrupted if $D_t > 0.3 \kappa$. This agrees with our heuristic that $\Dturb\sim \kappa$ is necessary for flame disruption. If the mixing is allowed to extend even deeper into the flame (higher $N_{\rm crit}$), lower diffusivities are required; however, because our simulations suggest the turbulent diffusivity drops off very sharply with depth, we believe the most germane requirement for flame disruption is that from the shallowest mixing. We use the criteria derived from these MESA calculations to interpret the results of our Dedalus simulations. The Dedalus simulations address where and how efficiently convection mixes material in the presence of a buoyancy barrier.
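For reference, the radiative thermal diffusivity expression above can be evaluated directly in CGS units. The function below is a direct transcription of the formula; any numerical inputs supplied to it are placeholders, not values from the MESA models:

```python
# Radiative thermal diffusivity kappa = 4 a c T^3 / (kappa_star rho^2 cP),
# all quantities in CGS units.
A_RAD = 7.5657e-15       # radiation constant [erg cm^-3 K^-4]
C_LIGHT = 2.99792458e10  # speed of light [cm s^-1]

def thermal_diffusivity(T, kappa_star, rho, cP):
    """T [K], opacity kappa_star [cm^2 g^-1], rho [g cm^-3],
    cP [erg g^-1 K^-1]; returns kappa in [cm^2 s^-1]."""
    return 4.0 * A_RAD * C_LIGHT * T**3 / (kappa_star * rho**2 * cP)
```

The strong $T^3$ and $\rho^{-2}$ dependences are what make $\kappa$ vary substantially across the flame.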
However, because they do not self-consistently model a conductively-propagating flame, they cannot directly answer the question of whether a flame disrupts. The MESA calculations directly address whether convective mixing with a specific efficiency (relative to $\kappa$) and at a specific location (relative to $N$) is sufficient to disrupt a flame. We show these criteria in Fig.~\ref{fig:diffusions}: the region where $N > 0.3 N_{\rm fl}$ is hatched and the points where $\Dturb = 0.3\kappa$ are marked with stars. In all our Dedalus simulations, the stars are outside the hatched region, which implies that the mixing observed in Dedalus would not be sufficient to disrupt the flame. \section{Conclusions} \label{sec:conclusions} This paper describes simulations of an idealized model of convectively bounded carbon flames. The simulations are in the Boussinesq approximation, and assume a Brunt-V\"{a}is\"{a}l\"{a} frequency profile motivated by MESA simulations of carbon flames (Fig.~\ref{fig:buoyancy frequency}). On the convective timescale, carbon flames are almost stationary, so we do not explicitly include any nuclear burning in our model. The simulations evolve a passive scalar field which heuristically represents the carbon species fraction. Overshooting plumes mix the passive scalar into the convection zone. The passive scalar field quickly approaches a self-similar solution (equation~\ref{eqn:self similar}; see Fig.~\ref{fig:c-bar}), allowing us to calculate an effective diffusivity profile $\Dturb(z)$. The horizontally averaged 3D evolution of the passive scalar field is very well approximated by the solution of a 1D diffusion equation (equation~\ref{eqn:diffusion model}; see Fig.~\ref{fig:diffusion model}). Our simulations have large diffusivities compared to real stars. 
Despite the unphysical parameter regime of our simulations, we believe that we can still draw strong conclusions about mixing in real carbon flames, because of the clear trends in the simulation results as the parameters become more realistic, i.e., with increasing Rayleigh and Lewis numbers. Carbon flames have $\kappa/D \sim 10^6$, but convective mixing can stall a flame if the turbulent mixing due to overshoot is such that $\Dturb\sim\kappa$ within the flame. Overshoot in 1D stellar models is sometimes modeled by exponentially decreasing the diffusion coefficient outside the convection zone over a characteristic length \citep[e.g.,][]{Herwig00}. This parameterization does not in fact apply to our simulations, which have turbulent diffusivities which decrease as Gaussians, and then become negative below a critical height (Sec.~\ref{sec:diffusion profiles}). This suggests that a more useful parameterization is an overshoot length, as we find no convective mixing below a critical height. MESA calculations suggest that a region near the peak of the buoyancy frequency ($N\sim 0.3N_{\rm fl}$) must be mixed with $\Dturb>0.3 \kappa$ in order to disrupt the flame (Sec.~\ref{sec:flame-disruption}). None of our simulations of convective overshoot show any convective mixing in this region. In all of our simulations, the height at which $\Dturb=0.3 \kappa$ is well outside the region near the peak of the buoyancy frequency that MESA simulations show must be mixed in order to stall the flame (Fig.~\ref{fig:diffusions}). Moreover, this height shifts closer and closer to the convection zone (away from the flame) as either the Rayleigh number or $\kappa/D$ (the Lewis number) increase towards more realistic values. Furthermore, our simulations greatly overestimate the mixing efficiency, as our buoyancy frequency increases only modestly with depth (Fig.~\ref{fig:buoyancy frequency}). 
Although the ratio of inertia in our convective plumes to the stabilizing buoyancy force is very small ($\sim 10^{-4}$; see Table~\ref{tab:sims}), we estimate that our simulated plumes are nonetheless more powerful than realistic plumes by a factor of at least $\sim 10^6$. Taken together, these results strongly suggest that convection provides insufficient mixing to disrupt real carbon flames. The only way out of this conclusion is to posit that for yet higher ${\rm Ra}$ or ${\rm Le}$ numbers, the trends we find in mixing with increasingly realistic parameters reverse. Although we cannot rule this out, we regard it as unlikely. Physically, the lack of mixing is due to a simple physical principle: convective plumes must overcome a huge buoyancy barrier to reach the flame. There is no reason to expect them to suddenly be able to do so at even higher ${\rm Ra}$ or ${\rm Le}$. As a result, we conclude that convection provides insufficient mixing to disrupt a carbon flame and that ``hybrid C/O/Ne'' WDs are unlikely to be a typical product of stellar evolution. We have neglected important physics in this work, including rotation, magnetism, density stratification, and nuclear burning. However, it seems difficult for these effects to overcome the potential energy barrier, so we do not believe they will change our conclusion. Internal gravity waves generated by the convection could mix the fluid via breaking. The wave amplitude increases as $\sqrt{N}$ as the waves leave the convection zone and approach the flame. Waves can break if $k_r\xi_r\sim 1$, where $\xi_r$ is the vertical displacement and $k_r$ is the vertical wavenumber. Neglecting damping, theoretical models of internal wave generation by convection \citep[e.g.,][]{lecoanet2013} claim $k_r\xi_r\sim 1$ at the peak of the buoyancy frequency, $N_{\rm fl}$. However, the waves linearly damp due to thermal diffusion (which does not lead to chemical mixing). 
For carbon flames, we estimate the linear damping to become important near $N_{\rm fl}$, so it is unclear if the waves would break. Furthermore, breaking waves may only mix the unburnt fuel near $N_{\rm fl}$, having little effect on flame propagation. Our simulations all have $\nu=\kappa$, but in stars, we estimate the Prandtl number ${\rm Pr}=\nu/\kappa\sim 10^{-5}$. Thus, there are small-scale motions which are isothermal, but not strongly influenced by viscosity. These motions can penetrate the buoyancy gradient in the flame, and thus are expected to enhance mixing. At a fixed ${\rm Pr}$, we expect mixing to become less efficient as ${\rm Ra}$ increases, as the length scale on which perturbations are isothermal will decrease. Thus, as ${\rm Ra}$ increases, there will be less and less energy in isothermal perturbations. More quantitatively, the largest length scale for isothermal perturbations is $\ell\sim\kappa/v_{\ell}$, where $v_{\ell}$ is the typical velocity of eddies of size $\ell$. Assuming a Kolmogorov cascade with $v_{\ell}\sim \omega_0 H (\ell/H)^{1/3}$, we have $v_{\ell}\sim \unit[3\times10^{2}]{cm\,s^{-1}}$ and $\ell\sim\unit[10]{cm}$. The diffusive mixing produced by these eddies is about $\Dturb\sim\ell v_{\ell}\sim \kappa$, which is enough to disrupt the flame. However, these eddies will travel a depth $\ell\ll \delta$, and thus should not penetrate far enough into the flame to disrupt it. Future work should validate these estimates. Given the strong intermittency of convective turbulence, it is also possible that the majority of overshoot mixing may be caused by a few rare but powerful plumes. Although our study cannot rule out this possibility, we note that there are $\sim 10^6$ convective turnover times in the lifetime of a carbon flame. This is many fewer turnover times than in other astrophysical contexts (e.g., the solar convection zone), so rare events may be less important for carbon flames.
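The isothermal-eddy estimate above follows directly from combining the two quoted relations. Setting $\ell\sim\kappa/v_{\ell}$ and $v_{\ell}\sim \omega_0 H (\ell/H)^{1/3}$ and solving for $\ell$ gives
\begin{align}
\ell \sim H\left(\frac{\kappa}{\omega_0 H^2}\right)^{3/4}, \qquad
v_{\ell} \sim \left(\kappa\,\omega_0^3 H^2\right)^{1/4},
\end{align}
and the associated diffusivity $\Dturb\sim\ell v_{\ell}\sim\kappa$ then follows identically from the definition $\ell\sim\kappa/v_{\ell}$.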
Future work should also study mixing via overshoot in oxygen-neon flames, which is important for understanding whether stars at the top of the SAGB mass range undergo Fe core collapse or electron-capture-induced ONe core collapse \citep{Jones14}. \section*{Acknowledgments} \noindent{}We thank two anonymous referees for elucidating comments. We acknowledge stimulating workshops at Sky House where these ideas germinated. D.L.~is supported by the Hertz Foundation. E.Q.~is supported in part by a Simons Investigator Award from the Simons Foundation. J.S.~is supported by NSF grant AST 12-05732. G.M.V.~acknowledges support from the Australian Research Council, project number DE140101960. J.S.O.~is supported by a Provost's Research Fellowship from Farmingdale State College. This research is funded in part by the Gordon and Betty Moore Foundation through Grant GBMF5076 to L.B.~and E.Q. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. This project was supported by NASA under TCAN grant number NNX14AB53G. This work was partially supported by the National Science Foundation under grants PHY 11-25915 and AST 12-05574.
\section{Conclusions \& Future Work} \label{sec:conclusion} The study results demonstrated that the dynamics-based abstract map empowers traditional robot navigation systems with symbolic navigation performance comparable to humans in unseen built environments. This level of performance was achieved by first creating a speculative spatial model imagined from symbolic descriptions of places, then iteratively refining and improving that model with symbolic observations gained by the robot as it traverses the real world. A robotics-oriented grammar, specifically designed for the task of representing the symbolic spatial information embedded in navigation cues, allowed the abstract map to seamlessly incorporate the symbols implicit in navigation cues in built environments. Consequently, a viable approach to grounding human spatial symbols was demonstrated with the abstract map by using navigation cues in a robot navigation process. From the results of the study---particularly the insights provided by the human participants---the following avenues for enhancing the abstract map are proposed: \begin{itemize} \item Gaining a deeper understanding of the differences between abstract map and human performance by testing in larger-scale environments like buildings or campuses, with varying levels of symbolic spatial information; \item Enabling the abstract map to employ negative information (the sensation of ``this doesn't seem right'' felt when cues in the environment aren't describing what is expected); and \item Using long range cue detection to guide the underlying robot navigation process as done by human participants, rather than blindly moving to the goal and accepting cue observations along the way. \end{itemize} In this paper we have presented the abstract map, a system that allows a robot to enhance its navigation process with the rich information provided by symbols. 
By employing a spatial model based on multi-body dynamics, and utilising the symbolic spatial information embedded in an environment's purposefully placed navigation cues, this paper has demonstrated that a robot using the abstract map is capable of performing symbolic navigation at a level comparable to human performance. The method presented allows robots to move out of seen spaces described by limited subsets of human symbols and into real-world human environments like schools, hospitals, offices, and zoos; an imperative transition in realising ambitions for robots to become ubiquitous co-inhabitants of built environments. \section{Discussion} The results suggested that the human navigation process is more reliant on seeking out and leveraging symbolic spatial information, while the abstract map is still primarily influenced by observed spatial layout. \section{Background and Related Work} \label{sec:related} To understand how a robot can use symbolic spatial information to inform its navigation process, there are three relevant topics in the literature: 1) how navigation cues communicate symbolic spatial information, 2) robotic interpretations of the symbol grounding process, and 3) the use of symbols in the robot navigation process. Each of these is explored in detail in the sections below. \subsection{Symbolic Spatial Information from Navigation Cues} \label{subsec:cues} Symbols are central in every navigation cue that humans place in their built environments. The diversity of symbols employed in navigation cues is large: arrows are used for signboards; arbitrary labels exist for roads, train stations, buildings, offices, etc.; words are used to communicate spatial directions; pictorial artefacts are used in maps and sketches; and even a basic action like pointing a finger can be used to signify direction.
Navigation cues use symbols for two distinct purposes: to name a location in the world, and to describe a spatial relationship between locations. When referring to locations in the real world, a linguistic symbol called a \textit{toponym} is used. Toponyms---also referred to as labels, locations, places, or spaces \cite{schulz2009spatial}---are nouns used to refer to any classification of space, typically encapsulated by some form of basic geometric structure. The geometric structure can be a point (e.g. corner of Main Street and First Street), a one-dimensional path (e.g. Main Street), a region of a two-dimensional plane (e.g. block 37 on Main Street), or a three-dimensional volume (e.g. Sciences Building) \cite{tversky2003places}. The second use of symbols in navigation cues---describing spatial relationships between locations---employs a much wider range of symbol types, with intrinsic elements of the navigation cue often playing a crucial role in the symbolic communication. For example, the arrow symbol on a directional sign requires the observer to use the location and orientation of the sign in the real world to interpret the symbol. Symbolic methods for describing spatial relationships are split into four distinct types, which are discussed in detail in the following paragraphs (examples of these are shown in Fig. \ref{fig:navigation_cues}). \subsubsection{Natural language descriptions and directions} are examples of linguistic navigation cues, which can be either spoken or written. Natural language descriptions use linguistic symbols to describe the spatial relationship between toponyms. Sequential directions additionally use ordering to break a complex path into a sequence of spatial relations. Examples of linguistic cues can be seen in Fig. \ref{subfig:cues_descriptions} and \ref{subfig:cues_directions}. 
Linguistic navigation cues use a set of words called \textit{spatial prepositions} \cite{tyler2003semantics}---a subset of only 80 to 100 prepositions in the entire English language \cite{landau1993whence}---to describe the spatial relationship between spaces. Examples include ``left'', ``right'', ``towards'', ``beside'', ``between'', ``past'', etc. Interpretation of spatial prepositions uses simple units of space, with no more geometric complexity required than points, containers, volumes, or units with basic axial structure (like a tree with nodes and edges) \cite{landau1993whence,van2003representing}. Spatial prepositions describe the spatial relationship of a target called the \textit{figure}, relative to a reference location called the \textit{reference object}. Fig. \ref{fig:linguistic_components} shows how prepositions and toponyms typically combine in phrases, with an included or implied \textit{context} playing an influential role in the interpretation of a spatial relationship \cite{levinson2003space}. Sequential directions for instance, assume the context is where the last step finished. \begin{figure}[t] \centering \includegraphics{./figs/linguistic_components.tikz} \vspace{-1ex} \caption{The key components of a linguistic cue: a spatial preposition describes the location of a figure with respect to a reference object, with context often aiding in interpretation (e.g. interpreting ``left of'').} \vspace{\shrinkfactor} \label{fig:linguistic_components} \end{figure} \subsubsection{Labels and signboards} are examples of locational cues, which communicate a spatial relation to an observer through their location in the environment. Label cues mark the real-world location of a toponym, whereas directional signs use arrows and approximate distances to describe a toponym's location relative to the cue's real-world location. Example locational cues can be seen in Fig. \ref{subfig:cues_labels} and \ref{subfig:cues_signs}. 
Locational cues, long identified as a crucial influence on human wayfinding performance \cite{weisman1981evaluating,o1991effects}, associate places in the real world to symbols embedded in the environment. The association provided is crucial in allowing a navigator to link their internal spatial concepts about the world with what they observe in the environment. \subsubsection{Sketch maps and metric maps} are examples of pictorial cues, which use the visual space in a picture to represent spatial concepts \cite{tversky2003structures}. Pictorial cues are classified by how visual space is used to communicate spatial relations. Sketch maps forgo unimportant information to focus solely on emphasising spatial relationships between places. Alternatively, metric maps express spatial relationships using geometric quantities in a to-scale picture. Examples of each type of cue, and a hybrid of both, can be seen in Fig. \ref{subfig:cues_sketch}, \ref{subfig:cues_metric}, and \ref{subfig:cues_metric_sketch} respectively. Humans find sketch maps a significantly more effective navigation cue than scaled metric maps \cite{wang2012empirical} due to the likeness of their spatial descriptions to human mental structures \cite{tversky2003structures} and approaches \cite{denis1997description}. Consequently, sketch maps can be considered similar to linguistic cues that use pictures in place of toponyms and spatial prepositions. Conversely, metric maps can be considered similar to locational cues but with the extra mental burden of conversion from the map's coordinate frame to the real world. \subsubsection{Navigational gestures} communicate spatial information through gestures, a symbolic communication performed through hand movements. Gestures come in four different types \cite{allen2003gestures}---iconics, metaphorics, deictics, and beats---with only iconics and deictics employed in navigation cues. 
An example iconic gesture is placing a hand in front of the other to visually support the description ``the coffee shop is in front of the building''---the hands are being used as icons for the places. Deictics are the pointing gestures used by speakers to orient the listener in referential space. A common example is the pointing gesture to communicate ``the coffee shop is over there'', as shown in Fig. \ref{subfig:cues_gesture}. Both forms of navigation gesture can be considered hand-based versions of previously discussed navigation cues. A deictic gesture is a locational cue with greater flexibility in direction than printed arrows (using a sign hanging vertically on a wall to communicate \ang{61} east of north is infeasible), and the cue provider can also move throughout the environment. Iconic gestures are indecipherable without the accompanying linguistic description, and consequently can be thought of as linguistic cues with added verbosity via hand movements. \subsection{The Symbol Grounding Problem} The core challenge in using symbolic spatial information, for both humans and robots, revolves around extracting meaning from symbols; a problem referred to as the \textit{symbol grounding problem} \cite{harnad1990symbol}. Both robots and humans represent the world through their own internal concepts. However they require a method of representing symbols in terms of their own internal concepts before they can extract real world meaning from symbols. The problem is often represented through the semiotic triangle \cite{ogden1923meaning} (see Fig. \ref{fig:semiotic_triangle}), coined by Peirce \cite{peirce1902logic}. The semiotic triangle frames symbol grounding as a combination of physical grounding and social grounding. 
Physical grounding is the linking of internal concepts to the real world \cite{roy2005semiotic,brooks1990elephants,vogt2002physical}, whereas social grounding is the linking of shared concepts, like symbols, to internal concepts \cite{steels2000aibo,schulz2011lingodroids}. In the scope of mobile robotics, transforming sensor data into internal spatial models like maps and pose graphs is considered physical grounding. Once a physical grounding is established, the robot can use these internal spatial models to complete navigation tasks in the real world. Social grounding in robotics is a process which gives the robot the ability to understand and communicate in a symbolic lexicon. One example is in the emergence of symbols amongst communicating robots. Studies use activities called language games \cite{roy2005semiotic,steels2008symbol} to develop and communicate a shared semiotic symbolic lexicon amongst robot populations \cite{steels2015talking,cangelosi2001evolution,vogt2007social,schulz2011lingodroids}. All of these studies focused on attaching, communicating, and interpreting symbols already linked to robot concepts rather than imposed symbols like human language. \subsection{Use of Human Symbols in Robot Navigation} The scale of the human symbolic lexicon, and the lack of a universal solution to the symbol grounding problem, make using human symbols in robotics a challenging task. As a result, robotic systems typically employ a restrictive subset of human symbols (e.g. only pointing gestures, data structures requiring manual annotation by humans, or limiting language to a handful of words with static interpretations). Additionally, robot navigation typically limits the application of symbols to already explored spaces. However, utilising symbols only in observed spaces misses the fundamental utility of symbols---sharing human spatial perceptions with robots to enable navigation without requiring prior perception.
\begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{./figs/semiotic_triangle.tikz} \vspace{-1ex} \caption{The semiotic triangle {\cite{peirce1902logic}} describes symbol grounding as an outcome of both deriving internal concepts for physical objects, and linking shared symbols to these internal concepts.} \vspace{\shrinkfactor} \label{fig:semiotic_triangle} \end{figure} Approaches in the literature have advanced from requiring a human-in-the-loop to automatically linking symbols and robot spatial models. Human-in-the-loop approaches have progressed from using humans to interpret and follow automatically generated navigation instructions \cite{sriharee2013indoor}, to robots using human-annotated semantic maps to follow complex natural language instructions like ``go to the kitchen while hugging the right wall'' \cite{fasola2013using}. Significant progress has been demonstrated in the area of semantic mapping {\cite{kostavelis2015semantic}}, with approaches for attaching symbols to maps including the use of discrete areas of segmented robot maps \mbox{\cite{galindo2005multi,galindo2008robot}}, linking perceptions with ontological representations \mbox{\cite{kostavelis2016robot,pronobis2012large}}, and probabilistically inferring symbolic labels from object detections (e.g. attaching the ``office'' symbol to a space because computers and desks were detected) {\cite{kollar2010toward}}. Work using probabilistic inference has culminated in the generalised grounding graph (GGG) \cite{tellex2011approaching} and extensions \cite{howard2014natural,chung2015performance}, which interpret novel commands by mapping between words in language and concrete objects, places, paths, and events in the external world. Although the progress is significant, demonstrated systems still only consider the scope of spaces already observed by a robot.
Progress in unseen spaces has relied on using constrained subsets of human symbols, or on limiting how far the robot can explore outside of already seen spaces. The guarantee of sequence in route instructions has been exploited with symbol subsets ranging from pointing gestures \cite{bauer2009autonomous} and restricted artificial instruction sets like ``\$GO()'' and ``\$FOLLOW()'' \cite{elmogy2011multimodal}, all the way to natural language \cite{macmahon2006walk} and free-hand sketches \cite{boniardi2016autonomous}. A limited semantic vocabulary consisting of four prepositions has been used to improve existing navigation performance in observed spaces \cite{walter2013learning,hemachandra2014learning}. Extensions of the approach use a single novel instruction to find an unseen place, like finding the kitchen using ``go to the kitchen that is down the hallway'' \cite{hemachandra2015learning,duvallet2016inferring}. However, the literature offers no further progress in using symbols in navigating unseen spaces. Previous work by the authors used symbols from particular types of navigation cues to navigate unseen spaces. Firstly, an abstract map for converting locational cues between the frames of reference of a floor plan and a robot was presented \cite{schulz2015robot}. Next, we demonstrated an abstract map using structured linguistic navigation cues with limited graph-based support for navigating between different spaces \cite{talbot2016find,talbot2015reasoning,talbot2018integrating}. In this work, we expand the scope to address how an abstract map can generically employ the symbolic spatial information embedded in all types of navigation cues for robot navigation in built environments.
\section{Introduction} \begin{figure}[p] \centering \input{./figs/navigation_cues_vert.tex} \caption{Examples of different types of navigation cues and symbolic methods used to describe spatial relations.} \label{fig:navigation_cues} \end{figure} Proficiently navigating through unseen urban environments is a vital part of daily life for humans, whether it be meeting in a new colleague's office, making it to the correct departure gate in a foreign airport, locating an apartment while on an overseas holiday, finding the lion at the zoo, or even finding bananas in a new grocery store. Robots must develop the same navigation abilities that humans exhibit if they are to truly become useful co-inhabitants of built environments. Wayfinding \cite{mollerup2013wayshowing}, the human navigation process in built environments like offices or shopping centres, relies on a type of spatial cue called a \textit{navigation cue}. Navigation cues come in many forms including labels, signs, maps, planners, structural landmarks, spoken directions, and navigational gestures. A subset is shown in Fig. \ref{fig:navigation_cues}. Navigation cues embed rich spatial information, and are placed throughout an environment to aid the navigation of people who have never been there before (e.g. labels are placed on the outside of offices, floor plans and maps at main entrances, and signs at choice points in corridors or walkways). Wayshowing \cite{permollerup2005} is a set of purposeful design practices and principles that inform the placement of navigation cues so as to maximise environment navigability. Wayshowing plays a key role in the architectural design of built environments. Navigation cues provide a special class of navigation information referred to as \textit{symbolic spatial information} to convey information about the spatial structure of the world.
Symbols are the backbone of human communication, with simple elements such as words, phrases, pictures, arrows, and gestures employed to concisely represent spatial concepts. The conciseness in symbolic representations is achieved by omitting superfluous details, instead relying on the observer's capabilities and experiences to decode symbolically communicated concepts while deducing missing details. The nature of symbols often results in symbolic spatial information being ambiguous or challenging to correlate with meaning in the real world \cite{landau1993whence}. Nevertheless, humans can capably and effortlessly leverage the richness of symbolic spatial information to profound effect. In contrast, robots are typically oblivious to the rich spatial information available in navigation cues. Robots instead use raw, low-level sensorimotor measurements to navigate their environments. The measurements typically come in the form of either range and bearing data for surrounding obstacles \cite{durrant2006simultaneous,montemerlo2002fastslam,grisetti2007improved}, or snapshots of visual appearance \cite{milford2004ratslam}. Navigation is then performed using spatial models and algorithms representing the geometric structure of a space, with no incorporation of semantics. Robots that navigate using only low-level sensor measurements are incapable of purposeful navigation in spaces previously unvisited by the robot. Such spaces are referred to in this paper as \textit{unseen spaces}. Symbolic spatial information provides an opportunity to create richer spatial models than those solely estimating geometric structure from a robot's sensor measurements. Robotic systems that use symbols to navigate are not prevalent in the literature, and those that exist carry varying restrictions.
Such restrictions include requiring human-constructed spatial models \cite{fasola2013using}; inferring semantics solely from object occurrences in spaces \cite{galindo2008robot,kollar2010toward}; probabilistic models limited to seen spaces \cite{walter2013learning,tellex2011approaching}; and using limited symbol sets \cite{bauer2009autonomous,elmogy2011multimodal,boniardi2016autonomous} like pointing in unseen spaces. The utility of symbols for robot navigation in built environments remains untapped. We present a navigation system that leverages both the abstract nature of navigation symbols and traditional geometric spatial models to provide purposeful navigation in unseen built environments. The system employs a malleable spatial model called the \textit{abstract map} shown in Fig. \ref{fig:system_outline}. It allows a traditional robot navigation system to utilise the symbolic spatial information embedded in navigation cues. Our research provides the following contributions in the area of symbolic navigation for mobile robots: \begin{itemize} \item a robotics-oriented grammar, with hand-crafted clauses, used to express the spatial information communicated by navigation cues (the perceptual challenges associated with extracting symbolic spatial information from images of navigation cues are left as open research questions); \item the abstract map, a malleable spatial model used by the robot navigation system to purposefully navigate unseen spaces; \item a novel method for using a dynamic multi-body system to ``imagine'' malleable spatial models of unseen built environments; \item procedures for reconciling spatial models imagined from symbols with information received from the direct sensorimotor perceptions of the robot; and \item an open source implementation of the abstract map---available at \url{https://btalb.github.io/abstract_map/} \end{itemize} The abstract map is evaluated in a study comparing robot to human navigation performance in an unseen real built 
environment. We present the following findings from the study: \begin{itemize} \item a quantitative comparison of human and abstract map navigation performance, \item qualitative insights into the human navigation process, and \item suggestions as to how robot symbolic navigation systems can be improved in the future. \end{itemize} \begin{figure}[t] \centering \input{./figs/system_outline.tikz} \caption{System diagram for a navigation system using navigation cue observations to navigate an unseen space. The abstract map uses a malleable spatial model to tether spatial symbols to direct robot perceptions.} \vspace{\shrinkfactor} \label{fig:system_outline} \end{figure} The rest of the paper is organised as follows. Section \ref{sec:related} describes the use of symbols in navigation cues and robot navigation systems. The abstract map is formally defined in Section \ref{sec:abstract_map}, with the experimental procedure and results then presented in Sections \ref{sec:procedure} and \ref{sec:results} respectively. The paper concludes in Section \ref{sec:conclusion} with a discussion of the results, and suggestions for future work. \section{Results} \label{sec:results} Symbolic navigation performance with the abstract map was evaluated against human participants, with a number of qualitative insights gained about the human symbolic navigation process. This section provides a quantitative comparison of symbolic navigation performance between a robot navigation system employing the abstract map and human participants, expanded details describing one robot and one human trial, as well as a qualitative summary of insights from the human participants. Numbers in the text such as \#12 refer to the tag numbers shown in Fig. \ref{subfig:experiment_map}. 
\subsection{Robot Performance against a Human Benchmark} \begin{table}[t] \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{@{}lccc@{}} \toprule Symbolic Goal & Human (\si{\metre}) & Robot (\si{\metre}) & Improvement (\%) \\ \midrule Kingfisher & $34.4$ & $36.9$ & \textcolor{red!80!black}{$-7.3$} \\ Toilets & $52.1$ & $37.6$ & \textcolor{green!50!black}{$27.8$} \\ Lion & $49.4$ & $45.6$ & \textcolor{green!50!black}{$7.6$} \\ Polar bear & $57.6$ & $40.5$ & \textcolor{green!50!black}{$29.8$} \\ Anaconda & $38.7$ & $38.9$ & \textcolor{red!80!black}{$-0.6$} \\ \textbf{Overall} & $\bm{46.4}$ & $\bm{39.9}$ & \textcolor{green!50!black}{$\bm{11.5}$} \\ \bottomrule \end{tabular} \caption{Average distance travelled for the human benchmark $\bar{x}_h$, and robot trials $\bar{x}_r$ (improvement is $1-\bar{x}_r/\bar{x}_h$)} \label{tab:results} \end{table} Table \ref{tab:results} compares the mean distances travelled by human and robot participants for each of the five symbolic navigation tasks. Fig. \ref{fig:experiment_results} summarises the distances travelled in each of the 50 trials. Human participants travelled an average distance of \SI{46.4}{\metre}, whereas the robot travelled \SI{39.9}{\metre} on average, with the abstract map guiding the robot to more efficient task completion in three of the five navigation tasks. Overall, the abstract map guided the robot to complete tasks \SI{11.5}{\percent} more efficiently than human participants, and \SI{5.3}{\percent} more efficiently with the two outlier human results removed. \subsection{Expanded Robot Result: Finding the Lion} \label{subsec:results_robot} Fig. \ref{fig:lion_robot} shows the full path taken in the shortest robot trial (\SI{44.5}{\metre}) for the symbolic navigation task to ``find the lion''. The robot started near the ``Exit'', with no existing map of the world and no prior information describing the zoo.
It built a metric map as it travelled through the environment using a SLAM system, and used straight-line navigation plans when planning in unseen spaces. Examples of the underlying SLAM system's output are shown in Fig. \ref{fig:experiment_abstract_map}. An initial spatial model for the zoo was imagined using relational clauses extracted from the zoo hierarchy graph shown in Fig. \ref{subfig:experiment_graph}. With no symbolic spatial information describing the zoo layout, the system began moving to its imagined location for the ``Lion'' as shown in Fig. \ref{subfig:experiment_abstract_map_a}. \begin{figure}[tp] \centering \input{./figs/results_graph.tex} \caption{Human and robot performance in the experimental trials, measured in distance travelled. Single outlier results occurred in the human ``Toilets'' and ``Polar bear'' trials, as noted in the graph.} \vspace{\shrinkfactor} \label{fig:experiment_results} \end{figure} \begin{figure}[!hbp] \centering \includegraphics[width=\columnwidth]{./figs/lion_robot.png} \caption{``Find the lion'', robot trial number 5. The robot started at the circle, and found the ``Lion'' at the cross. Tags and locations observed by the robot are shown in bold.} \vspace{\shrinkfactor} \label{fig:lion_robot} \end{figure} \begin{figure*}[p] \centering \input{./figs/experiment_abstract_map.tex} \caption{Process undertaken by a robot navigation system using the abstract map to successfully navigate the robot to the ``Lion'' (see \url{https://btalb.github.io/abstract_map/} for videos of the process).} \label{fig:experiment_abstract_map} \end{figure*} While avoiding obstacles and following the path planned by the underlying navigation system, the robot continued moving through free space towards the abstract map's current imagined location for the goal (near the ``Information Desk'' in reality). The robot proceeded into the ``Zoo Foyer'' upon seeing the label, and observed the signboard in tag \#3 (see Fig.
\ref{subfig:experiment_signs} for signboard contents). Using the wealth of directional information in the signboard, the abstract map was heavily refined as shown in Fig. \ref{subfig:experiment_abstract_map_b} and guided the robot left in search of the ``Lion''. Next, the robot passed tag \#4, which communicated that the ``African Safari is past the Information Desk'' and the ``Information Desk'' was to the right. The abstract map was updated with the information, but suggested a location for the ``Lion'' in between going right and straight ahead at the fork due to also having information from tag \#3 suggesting the ``African Safari'' was straight ahead. With the underlying path-planning navigation system choosing to go straight as shown in Fig. \ref{subfig:experiment_abstract_map_c} (right was also chosen in other trials), the robot proceeded through the ``Bird Aviary'' and past the labels for the ``Toucan'' and ``Falcon''. Tag \#8 communicated a number of directional messages regarding the remaining birds in the ``Bird Aviary'' at the end of the junction. Importantly, it communicated that the ``Owl'' was to the right and the ``African Safari is past the Owl'', causing the abstract map's estimate to guide the robot right at the junction in search of the ``African Safari'' as shown in Fig. \ref{subfig:experiment_abstract_map_d}. After passing labels for the ``Owl'' and ``Parrot'', the robot found a label for the ``African Safari'' at tag \#14 and a signboard describing the safari at tag \#15. Amongst other information, the signboard communicated that the ``Giraffe'' was directly ahead and the ``Lion was past the Giraffe'', which was then used in the abstract map to guide the robot straight ahead as seen in Fig. \ref{subfig:experiment_abstract_map_e}. A final update of the abstract map's spatial model was performed upon observing the ``Giraffe'' label, before the robot completed the symbolic navigation task by finding the label for the ``Lion'' (as seen in Fig.
\ref{subfig:experiment_abstract_map_f}). \subsection{Expanded Human Result: Finding the Lion} The fourth-best human participant (\SI{50.7}{\metre}) began at the same location as the robot, as shown in Fig. \ref{fig:lion_human}, and was instructed to ``find the lion''. The participant, who had never been to the environment before, was given a mobile phone with the application shown in Fig. \ref{fig:experiment_app}, and the graph of the zoo hierarchy shown in Fig. \ref{subfig:experiment_graph}. Labels for the ``Exit'', ``Ticket Office'', and ``Zoo Foyer'' were missed by the participant as they walked directly to the main signboard at tag \#3, which had directional labels for nine different locations (including most of the themed animal areas). The participant spent time processing and double-checking the information before proceeding left. Next, the participant walked directly past the signboard in tag \#4 to observe the label for the ``Bird Aviary'' at tag \#5. The participant then backtracked to find the signboard in tag \#4, later commenting that seeing the ``Bird Aviary'' label was ``negative information'' that made them question their current approach. In looking for the ``African Safari'', the participant proceeded directly past the label for the ``Information Desk'' at tag \#23. A directional signboard and label for the ``African Safari'' were observed at tags \#21 and \#22 respectively, with the participant proceeding down the hallway of the ``African Safari'' while deliberately not scanning tags \#18--\#20 on the left wall. At the end of the walkway they observed the signboard from tag \#15, and then observed the ``Lion'' label after purposely walking past the label for ``Giraffe''. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{./figs/lion_human.png} \caption{``Find the lion'', human trial number 4. The human started at the circle, and found the ``Lion'' at the cross.
Tags and locations observed by the participant are shown in bold.} \vspace{\shrinkfactor} \label{fig:lion_human} \end{figure} In the post-interview, the participant shared their navigation process. When queried about the skipped AprilTags, the participant described not noticing the tags in the ``Carpark'' and initially at tag \#4 as well as ``guessing'' based on the structure of the environment. Time spent at the main signboard in tag \#3 was described by the participant as ``reading it twice to try and remember it'' before describing trying ``to picture'' what the sign was communicating. To picture what the sign was communicating, the participant described ``trying to picture it as a route, or left and right [forks with a focus on] which directions to go'' while using the environment to eliminate unrealistic routes like ``requiring you to go through chairs or where people are working''. The comment suggested a symbolic navigation process involving an axial structure like a simple topological graph, but further questions revealed no more details. The trial concluded with AprilTags described as ``pretty similar'' to typical navigation cues, aside from the minimal potential for confusion in interpreting arrows relative to the phone screen rather than AprilTag placement in the real world. \subsection{Insights from Human Participants} Human participants offered a number of insights through verbalisation while completing the navigation tasks, and question responses in the post-interview. Listed below are the insights that were deemed relevant to evaluating study outcomes and understanding the human navigation process in unseen built environments: \begin{itemize} \item No participants believed using tags instead of normal navigation cues affected their navigation process (\SI{88}{\percent} said it was comparable, with the remaining participants offering no direct answer). 
\item The majority (\SI{76}{\percent}) of participants commented on the ``added hassle'' in reading a tag with the phone rather than simply reading a normal cue, affirming that the study limited the efficacy of human navigation cue perception to a level comparable with the robot. \item Participants often went out of their way to find the next AprilTag cue (as stated by \SI{60}{\percent}), generally commenting that they ``relied on the cues or tags more than their best guess'' of where the goal could reside. The abstract map has the opposite approach to the problem: it primarily follows its imagined goal location, and updates the location with any cues it observes along the way (an approach only mentioned by \SI{12}{\percent} of participants). \item \SI{32}{\percent} of participants walked directly past cues even though they were placed in conspicuous places at eye level. Most claimed to not notice the AprilTag, suggesting that visual attention may play a part in human navigation performance. \item Participants identified a wide range of cues used for navigation besides the AprilTags, grouped under three categories: visual, environmental context, and deeper cognition (present in \SI{56}{\percent}, \SI{32}{\percent}, and \SI{16}{\percent} of responses respectively). Visual cues from the environment included walkways, physical spatial structure, and lack of typical zoo features (e.g. thematic elements in the ``Arctic Frontier''). Environmental context cues included knowing which areas were out of bounds, labels being more likely to be on offices, signboards more likely at choice points, and likely segmentations of space for the zoo areas. Lastly, deeper cognitive cues were employed, like matching the spaces to where it looked like there was enough room for all of the animals in the zoo hierarchy graph, and guessing the experiment designer's thought process. \item The navigation strategies employed by participants displayed considerable variety.
Strategies included trusting tags over instincts (\SI{56}{\percent}), using tags heavily at navigational choice points (\SI{40}{\percent}), deliberately walking past cues to trust instincts or guessing what was likely being communicated (\SI{24}{\percent}), exploratively wandering until feeling lost then looking for cues (\SI{12}{\percent}), using negative information in cues to rule out possible options (\SI{32}{\percent}), and applying the context of typical zoo layouts from past experiences (\SI{16}{\percent}). \item \SI{20}{\percent} of participants took paths that appeared to have no explanation, with \SI{8}{\percent} taking significant detours (the outlier results). Comments by participants suggested this was due to misunderstanding cues, failing to see locations within the physical environment, and employing intuition without any other guidance. \end{itemize} \section{Acknowledgements} This research was supported under the Australian Research Council's Discovery Projects funding scheme (project number DP140103216). We would also like to acknowledge the contributions of Ben Upcroft and Ruth Schulz to the early stages of this research. \section{The Abstract Map} \label{sec:abstract_map} Three concurrent processes are used with the abstract map to harmonise and exploit symbolic and metric spatial information. Firstly, the clauses of a robotics-oriented grammar are used to generically represent the symbolic spatial information embedded in navigation cues. Next, spatial models for unseen spaces are imagined from symbols alone using malleable interpretations of the symbolic spatial information in clauses. Finally, the malleable elements of the abstract map are adapted to reflect the real world perceptions of the robot. In this work, simulated spring dynamics are used to construct the abstract map's malleable spatial model. Each of the processes is described in detail below. 
\subsection{Capturing Symbolic Spatial Information from Navigation Cues} We define a robotics-oriented grammar where sets of clauses represent collections of symbolic spatial information. Clauses consist of real numbers $\mathbb{R}$, angles $\mathbb{S}^1$, reference frames, elements of the set of toponyms $P$, and elements of the set of spatial prepositions $S$. The grammar concisely describes the symbolic spatial information embedded in navigation cues. In this work we do not address the perceptual challenges associated with \textit{how} symbolic spatial information can be extracted from navigation cues, although we have conducted preliminary studies in this area \cite{schulz2015robot,lam2014text,lam2015automated}. A \textit{relational clause} describes the spatial relationship between symbolic locations and is parameterised by the function \begin{equation} \relclause{s}{p_f}{p_{r_{1 \dots n}}}{p_c} \label{eq:relational_clause} \end{equation} where $s \in S$ is the preposition used to describe the spatial relationship between the figure toponym $p_f \in P$ and one or more referent toponyms $p_{r_{1 \dots n}} \in P$, given the context toponym $p_c \in \{\varnothing, P\}$ ($\varnothing$ denotes no provided context). The conversion of a navigation cue can produce numerous clauses, with this being formally defined as the conversion to a set of clauses. For example, a set containing a single clause captures the symbolic spatial information in the linguistic cue ``Isla's office is between the entryway and printer'': \begin{equation*} \big\{\relclause[text]{between}{Isla's office}{entryway, printer}{$\varnothing$}\big\}\,. 
\label{eq:relational_clause_example} \end{equation*} A \textit{locational clause} links symbolic locations to locations in an environment and is parameterised by the function \begin{equation} \locclause{p}{\refframe{F}}{x}{y}{r}{\theta} \label{eq:locational_clause} \end{equation} where $p \in P$ is the toponym whose location is described as a distance of $r \in \{\varnothing,\mathbb{R}\}$ and direction of $\theta \in \{\varnothing,\mathbb{S}^1\}$ from the point $(x,y)$ in reference frame $\refframe{F}$. Here $\varnothing$ denotes that the value can be unspecified. For example, an office label for ``Riko's Office'' observed at coordinates $(5.21,1.76)$ relative to the robot would be captured by the set of clauses: \begin{equation*} \big\{\locclause[text]{Riko's Office}{\refframe{W}}{5.21}{1.76}{0}{\varnothing}\big\} \label{eq:locational_clause_example} \end{equation*} where $\refframe{W}$ is the world frame of reference, $r$ is $0$ as the label specifies where a place is, and $\theta$ is unspecified. Fig. \ref{fig:grammar_examples} shows examples of how the two types of clause can be employed to capture symbolic spatial information from human navigation cues. \begin{figure}[p] \centering \input{./figs/cue_clauses.tex} \caption{Examples of using a set of clauses from the robotics-oriented grammar to capture the symbolic spatial information communicated by navigation cues.} \label{fig:grammar_examples} \end{figure} \subsection{Generating Spatial Models from Relational Clauses} \label{subsec:relational} Prepositions encoded in relational clauses symbolically describe two spatial properties: spatial layout and spatial hierarchy. Prepositions that describe spatial layout include ``left'', ``down'', ``west'', and ``beside'', whereas the prepositions ``in'', ``contains'', and ``within'' are examples describing spatial hierarchy. Descriptions of layout and hierarchy can be used to inform the imagination of plausible spatial models for unseen spaces.
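The two clause types defined above map naturally onto simple data structures. The following Python sketch shows one possible encoding, with \texttt{None} standing in for $\varnothing$; the class and field names are illustrative assumptions, not those of the released implementation:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class RelationalClause:
    """Figure toponym related to referent toponyms by preposition s."""
    preposition: str               # s, an element of the preposition set S
    figure: str                    # p_f, an element of the toponym set P
    referents: Tuple[str, ...]     # p_r_{1..n}
    context: Optional[str] = None  # p_c; None plays the role of no context

@dataclass(frozen=True)
class LocationalClause:
    """Toponym p located r metres at bearing theta from (x, y) in frame F."""
    toponym: str                   # p
    frame: str                     # reference frame, e.g. "W" for the world frame
    x: float
    y: float
    r: Optional[float] = None      # distance; None when unspecified
    theta: Optional[float] = None  # direction; None when unspecified

# "Isla's office is between the entryway and printer"
c1 = RelationalClause("between", "Isla's office", ("entryway", "printer"))
# Label "Riko's Office" observed at (5.21, 1.76), so r = 0
c2 = LocationalClause("Riko's Office", "W", 5.21, 1.76, r=0.0)
```

Under this encoding, converting a navigation cue simply produces a set of clause objects, matching the formal definition of cue conversion above.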
The example built environment shown in Fig. \ref{fig:example_environment} is used below to describe the process undertaken in creating the abstract map's malleable spatial model from relational clauses alone. Spatial models are created from the symbols in relational clauses by translating clauses into spatial artefacts that capture both the spatial suggestion and malleability inherent in symbol interpretations. Relational clauses are represented in a dynamics-based spatial model by defining toponyms as point-masses which move within a plane, and mapping spatial prepositions to instances of the simulated springs in Fig. \ref{fig:springs}. Point-mass $i$ in a system is represented by a set of parameters $\Theta_i$ and state vector $\bm{\xi}_i$. The parameter set $\Theta_i$ contains toponym $p \in P$ and a constant unit mass. The state vector, with respect to the world frame $\refframe{W}$, is defined as \begin{equation} \bm{\xi}_i = \left[\begin{array}{c}\bm{x}_i\\\bm{\dot{x}}_i\end{array}\right] \, : \, \bm{x}_i = \left[\begin{array}{c}x_i\\y_i\end{array}\right] \, : \, x_i,y_i \in \mathbb{R} \,, \label{eq:point_mass_state} \end{equation} with $\iota(p) \rightarrow i : i \in \mathbb{Z}^+$ a function mapping toponym $p$ to its index in the set of point-masses $\Theta$. Spatial prepositions are mapped to one or more simulated springs which constrain either distance, absolute direction, or relative direction between two or more toponyms. The function $\sigma(\mathbf{x},\Lambda_j)$ defines the force applied to the system's point-masses by spring $j$, which is defined by the spring parameter $\Lambda_j$. $\Lambda_j$ consists of the spring type, the toponyms the spring connects to, stiffness $K$, and either natural length $r_n \in \mathbb{R}$ or angle $\theta_n \in \mathbb{S}^1$. For instance, the preposition ``right of'' is represented by a relative angle spring shown in Fig. 
\ref{subfig:spring_rel} with natural angle $\theta_n = \ang{90}$ between point-masses for the figure toponym and context toponym, relative to the referent toponym. The figure, referent, and context toponyms correspond to nodes A, B, and C respectively in Fig. \ref{subfig:spring_rel} for this example. The spring is given a moderate stiffness $K$ to represent that ``right of'' can apply to configurations that are not precisely orthogonal. Fig. \ref{fig:example_process} demonstrates further example conversions from symbols to springs in the translation phase. Spatial hierarchy---when a space is inside another like ``the foyer is in B Block''---is also modelled with springs. Hierarchy suggestions are first added to an evolving directed graph of spatial hierarchy as shown at the start of the translation step in Fig. \ref{fig:example_process}. For example, ``the University contains A Block'' would add a parent-child edge to the graph from the ``University'' node to ``A Block''. Each edge $k$ of the hierarchical graph is then converted to a distance spring $\Lambda_k$ with a natural length $r_n \in \mathbb{R}$ corresponding to the typical distance between spaces at that level of the graph. A very low spring stiffness $K$ is used to reflect the sweeping assumptions made in estimating distance solely from spatial hierarchy, and the wide variance of values in reality.
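The translation phase can be sketched as a lookup from preposition to spring type and natural parameter. The Python sketch below is illustrative only: the rule table, stiffness constants, and function names are assumptions rather than the system's tuned values:

```python
import math

# Illustrative preposition-to-spring rules; natural angles/lengths are
# placeholder assumptions, not the system's tuned constants.
LAYOUT_RULES = {
    "right of": ("relative_angle", math.radians(90)),
    "left of":  ("relative_angle", math.radians(-90)),
    "north of": ("absolute_angle", math.radians(0)),
    "beside":   ("distance", 2.0),                     # metres
}
K_MODERATE = 5.0   # layout springs: a suggestion, not a hard constraint
K_WEAK = 0.5       # hierarchy springs: sweeping distance assumptions

def layout_springs(preposition, figure, referents, context):
    """Translate one relational clause into spring parameter tuples Lambda."""
    kind, natural = LAYOUT_RULES[preposition]
    if kind == "relative_angle":
        # connects figure and context point-masses, measured about the referent
        return [(kind, (figure, referents[0], context), K_MODERATE, natural)]
    return [(kind, (figure, r), K_MODERATE, natural) for r in referents]

def hierarchy_spring(parent, child, typical_length):
    """One weak distance spring per parent-child edge of the hierarchy graph."""
    return ("distance", (parent, child), K_WEAK, typical_length)

springs = layout_springs("right of", "foyer", ("corridor",), "office")
edge = hierarchy_spring("University", "A Block", typical_length=50.0)
```

Hierarchy edges reuse the same distance-spring machinery as layout prepositions, differing only in their much lower stiffness.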
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{./figs/example_environment.tikz} \caption{A hypothetical university environment: ``A Block'' is in the top left, and ``B Block'' is on the bottom right.} \vspace{\shrinkfactor} \label{fig:example_environment} \end{figure} \begin{figure}[t] \centering \input{./figs/springs.tex} \caption{The springs used in imagined spatial models to represent the geometric constraints suggested by relational clauses.} \vspace{\shrinkfactor} \label{fig:springs} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{./figs/process_overview.tikz} \caption{Malleable spatial models for both spatial layout and spatial hierarchy (bottom row) are imagined for the hypothetical university environment from the respective relational clauses (top row). Models for spatial layout and hierarchy are split for illustrative purposes, but exist as a single model in the real system.} \label{fig:example_process} \end{figure*} \subsection{Imagining Spatial Models for Unseen Places using Spring-Mass Dynamics} Simulated dynamics are used to create malleable spatial models for unseen places from the spatial suggestions provided by relational clauses. A spatial model is defined as the position states of each of the $m$ point-masses in a system whose state vector is \begin{equation} \mathbf{x} = \left[ \> \bm{\xi}_1, \> \bm{\xi}_2, \> \dots \> , \bm{\xi}_m \right]^\intercal \enspace . \label{eq:system_state} \end{equation} As shown in Algorithm \ref{alg:imagining}, a spatial model is imagined by performing numerical integration on a starting system state $\mathbf{x}_0$ using the system motion model $\dot{\state} = f(t,\mathbf{x})$, until a settling criterion $\zeta(\dot{\state})$ is met. The following paragraphs define the process for augmenting a spatial model with new clauses, the components of the motion model $f(t,\mathbf{x})$, and the settling criterion $\zeta(\dot{\state})$.
\begin{algorithm}[t] \textbf{Input:} $\> \mathbf{x}_0, \> f()$ \begin{algorithmic}[1] \State $\mathbf{x} \gets \mathbf{x}_0$ \While{\textbf{not} $\zeta(\dot{\state})$} \State $\dot{\state} \gets f(t, \mathbf{x})$ \State $t, \mathbf{x} \gets \text{odeIntegrate}(t, \Delta{}t, \mathbf{x}, \dot{\state})$ \EndWhile \end{algorithmic} \caption{``Imagining'' a spatial model using point-masses} \label{alg:imagining} \end{algorithm} \begin{algorithm}[t] \textbf{Input:} $\> c_\text{new}, \> \Lambda, \> \Theta$ \begin{algorithmic}[1] \State $\Lambda \gets \text{clausesToSprings}(c_\text{new}) \cup \Lambda$ \State $\Theta_\text{new} \gets \text{pointMassesInSprings}(\Lambda) - \Theta$ \State $\Theta_\text{new} \gets \text{sortByTotalStiffness}(\Theta_\text{new}, \Lambda)$ \For{$p \gets \text{pointMassesToToponyms}(\Theta_\text{new})$} \State $i \gets \iota(p)$ \State $\bm{\xi}_i \gets \left[ \tilde{x}, \, \tilde{y}, \, 0, \, 0 \right]^\intercal$ \State $\mathbf{x} \gets \left[ \mathbf{x}, \, \bm{\xi}_i \right]$ \EndFor \end{algorithmic} \caption{Adding new clauses $c_\text{new}$ to a spatial model} \label{alg:add_new} \end{algorithm} A new spatial model is constructed when a set of new grammar clauses $c_\text{new}$ is received by the system, with the previous spatial model used as the starting system state $\mathbf{x}_0$. All new point-masses are first given an initial state through the iterative initialisation procedure shown in Algorithm \ref{alg:add_new}. The procedure sorts all new point-masses in descending order ranked by the total weight of constraints on their position (i.e. sum of $K$ values for all springs attached to the point-mass), then iteratively places each point-mass at the position $(\tilde{x}, \tilde{y})$ that most satisfies the constraints imposed by the natural length of all attached springs. The ordering ensures point-masses whose positions are most heavily constrained by springs are placed first when those constraints are most likely to be satisfiable. 
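The ordering step at the heart of Algorithm \ref{alg:add_new} can be sketched as follows (the spring data layout is an assumption made for illustration):

```python
def sort_by_total_stiffness(new_masses, springs):
    """Order new point-masses so that the most heavily constrained (largest
    summed K over attached springs) are placed first, as in Algorithm 2."""
    def total_stiffness(mass_id):
        return sum(s["K"] for s in springs if mass_id in s["masses"])
    return sorted(new_masses, key=total_stiffness, reverse=True)

# Assumed data layout: each spring lists its attached point-masses and K.
springs = [
    {"masses": ("foyer", "B Block"), "K": 1.0},
    {"masses": ("foyer", "A Block"), "K": 0.5},
    {"masses": ("A Block", "B Block"), "K": 0.1},
]
order = sort_by_total_stiffness(["A Block", "foyer"], springs)
# "foyer" (total K = 1.5) is placed before "A Block" (total K = 0.6)
```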
Each point-mass is given zero starting velocity when placed. After a spatial model is augmented with new clauses, a new spatial model is created using the updated system motion model. The function modelling the system motion, for a system with $m$ point-masses and $n$ springs, is defined as: \begin{equation} f(t, \mathbf{x}) = \sum_{j=1}^{n} \sigma(\mathbf{x},\Lambda_j) + \sum_{i=1}^{m} \tau(\mathbf{x},\Theta_i) + \sum_{i=1}^{m} \lambda(\mathbf{x},C(t),\Theta_i) \label{eq:system_configuration} \end{equation} where $\sigma(\mathbf{x},\Lambda_j)$ is the force applied by spring $j$ with parameters $\Lambda_j$, given system state $\mathbf{x}$. $\tau(\mathbf{x},\Theta_i)$ is the viscous friction force on point-mass $i$ with parameters $\Theta_i$, given system state $\mathbf{x}$. The viscous friction ensures a solution is reached by damping the motion. $\lambda(\mathbf{x},C(t),\Theta_i)$ adds an expansion force pushing point-mass $i$ away from $C(t)$, where $C(t)$ is the centre of explored mass in the robot's underlying metric map (see Fig. \ref{fig:system_outline}). The expansion component encourages the spatial model to expand away from already explored spaces, particularly when a place's location estimate is underconstrained (e.g. when a place has a spring suggesting relative distance but no spring constraining direction). The system dynamics are simulated using iterative Runge--Kutta integration of the ordinary differential equation $f(t,\mathbf{x})$, until the settling criteria $\zeta(\dot{\state})$ is met. Once the settling criteria is met, an imagined spatial model is returned as the positions of the system's point-masses.
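For intuition, this integration loop can be sketched for a single point-mass attached to one distance spring. This is a minimal illustration: semi-implicit Euler stands in for the Runge--Kutta integrator, and simple velocity and acceleration thresholds stand in for the settling criteria; all parameter values are assumptions:

```python
import math

def simulate_until_settled(pos, vel, anchor, r_n, K, mass=1.0, mu=0.1,
                           dt=0.01, L_v=0.01, L_a=0.01, max_steps=200000):
    """Integrate one point-mass attached by a distance spring (natural
    length r_n, stiffness K) to a fixed anchor point, with viscous
    friction mu, until velocity and acceleration fall below thresholds."""
    for _ in range(max_steps):
        dx, dy = anchor[0] - pos[0], anchor[1] - pos[1]
        dist = math.hypot(dx, dy) or 1e-9   # guard against a zero distance
        f = K * (dist - r_n)                # Hooke restoring force
        ax = (f * dx / dist - mu * vel[0]) / mass
        ay = (f * dy / dist - mu * vel[1]) / mass
        if math.hypot(ax, ay) < L_a and math.hypot(vel[0], vel[1]) < L_v:
            return pos                      # settled: at rest, forces balanced
        vel = (vel[0] + ax * dt, vel[1] + ay * dt)   # semi-implicit Euler step
        pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos

# Released at distance 3 from the anchor, the mass settles near the
# spring's natural length r_n = 1.
final = simulate_until_settled(pos=(3.0, 0.0), vel=(0.0, 0.0),
                               anchor=(0.0, 0.0), r_n=1.0, K=1.0)
```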
The settling criteria is defined as: \begin{equation} \begin{split} \zeta(\dot{\state}) = \sqrt{\ddot{x}_i^2 + \ddot{y}_i^2} < L_a \quad &\land \quad \sqrt{\dot{x}_i^2 + \dot{y}_i^2} < L_v \\ \forall \enspace i=1, &\dots, m \label{eq:settling_criteria} \end{split} \end{equation} where $L_a$ is the acceleration threshold at which a point-mass is deemed to be settled, and $L_v$ is the velocity threshold. The two conditions combine to continue simulating the system while any point-mass is in motion, or accelerating due to unbalanced forces. System dynamics cycle energy between spring tension and point-mass motion as they explore possible layouts for the imagined spatial model. Friction drives the system to a minimum where motion ceases, denoting the most representative layout. The graph shown in the imagination phase of Fig. \ref{fig:example_process} depicts the total energy over system time. To highlight the subtle modelling differences between spatial layout and spatial hierarchy, the final imagined spatial models are shown split by relational clause type in the bottom right of the figure. \subsection{Reconciling Imagination with Observation through Locational Clauses} Sweeping assumptions of distances, scales, sizes, and directions are made in the spatial model to imagine spatial layouts solely from symbols, and these will likely conflict with the robot's observations of its environment. No single set of assumptions applies to every built environment; assumptions must be adapted for differences in scale, structure, and between indoor or outdoor environments. The link between symbols and the real world environment in locational clauses provides information that can help inform these assumptions. We use this information to align the imagined spatial models in the abstract map with reality, and refine assumptions through experience.
This work only incorporates locational clauses for the robot's frame of reference (see \cite{schulz2015robot} for an approach that could be adapted to enhance the system described below). When conflicts between imagination and reality occur, they are reconciled in the spatial model by trusting observations over what has been imagined solely from symbols. To exert authority in the imagined spatial model two tools are employed: fixing point-masses and changing spring stiffness. Upon observing a cue at $({}^\refframe{W}x,{}^\refframe{W}y)$ describing the relative distance and direction to $p \in P$ as $r \in \{\varnothing,\mathbb{R}\}$ and $\theta \in \{\varnothing,\mathbb{S}^1\}$ respectively, a fixed point-mass is added at $({}^\refframe{W}x,{}^\refframe{W}y)$ in the malleable spatial model. Here $\varnothing$ is used when a cue doesn't communicate distance or direction (for example a sign with only an arrow gives no $r$ value). Springs are added between point-mass $\Theta_{\iota(p)}$ and the fixed point-mass, with a high stiffness coefficient. The high stiffness means springs created from observations will override suggestions from springs created by loosely imagining spatial layout solely from symbols. The robot's observations can also be used to update assumed scales in the abstract map, and exploit refined values for improved imagination. When the model was first imagined, the natural lengths of distance springs were set based on estimates of environment scale. With the benefit of real world observations, scaling factors can be manipulated to improve the abstract map's earlier estimates. Scaling factors $\alpha^a_b$ are employed for each unique unordered pair of levels $(a,b) \in (\mathbb{Z}^+)^2$ in the hierarchy graph. For instance, the example in Fig. \ref{fig:example_process} has three levels with level 1 corresponding to rooms, level 2 to buildings, and level 3 to university campuses. 
The hierarchy creates six scaling factors ($\alpha^1_1$, $\alpha^1_2$, $\alpha^1_3$, $\alpha^2_2$, $\alpha^2_3$, and $\alpha^3_3$), where $\alpha^1_1$ corresponds to the average distance between adjacent rooms, $\alpha^2_2$ the average distance between adjacent buildings, $\alpha^1_2$ the average distance between a room and its containing building, etc. Scaling factors between two hierarchy levels are given a default value until a distance $r_o$ is observed between the levels. $r_o$ is obtained when labels for both endpoints of a distance spring have been observed. Once a distance has been observed, the spatial model uses a scaling factor instead of the default value. Scaling factors are calculated by comparing the observed length $r_o$ of springs with their initial estimated natural length $r_n$. A scaling factor for hierarchy levels $(a,b)$ is the weighted arithmetic mean of the observed scaling error ($r_o / r_n$) for the $n$ distances observed between toponyms in $a$ and $b$: \begin{equation} \alpha^a_b = \frac{\sum\limits_{i = 1}^{n} K_i \frac{r_{o_i}}{r_{n_i}}}{\sum\limits_{i=1}^{n} K_i} \label{eq:scaling_factors} \end{equation} where the stiffness $K_i$ is used as the weight. The effect of the scaling factors can be seen in the hierarchical springs and spatial model created in Fig. \ref{fig:example_process}, and the results shown in Section \ref{subsec:results_robot}. Lastly, an exploration scaling factor $\mathcal{E}$ is applied to the natural length of each distance spring when a goal is not found at its imagined location. This factor expands the scope of exploration in larger-than-expected sections of environments, like outdoor environments with less repetitive structure. $\mathcal{E}$ has an initial value of $1$ and is increased multiplicatively by an exploration step $\Delta_{\explorefactor}$ when a goal is not found where it is expected. Step increases are applied until the robot observes a new navigation cue.
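A minimal sketch of the scaling-factor update in equation \eqref{eq:scaling_factors}, together with the multiplicative exploration step (the $(K, r_o, r_n)$ data layout is an assumption; the $25\%$ step matches the value used in the experiments):

```python
def scaling_factor(observations):
    """Stiffness-weighted mean of observed scaling errors r_o / r_n for one
    pair of hierarchy levels (assumed layout: (K, r_o, r_n) tuples)."""
    num = sum(K * (r_o / r_n) for K, r_o, r_n in observations)
    den = sum(K for K, _, _ in observations)
    return num / den

# Two observed room-to-room distances for hierarchy-level pair (1, 1).
alpha_1_1 = scaling_factor([(1.0, 6.0, 4.0), (0.5, 5.0, 4.0)])

# The exploration factor grows multiplicatively after each failed goal
# check (a 25% step, matching the parameter list used in the experiments).
explore = 1.0
for _ in range(3):
    explore *= 1.25
```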
The increases in $\mathcal{E}$ expand the spatial model, encouraging the robot to search for the goal outside of already visited areas. Upon observing a new navigation cue, $\mathcal{E}$ is reset to 1 and the normal process is resumed. \section{Experimental Process} \label{sec:procedure} There is relatively little prior work on symbolic navigation of unseen places and no relevant benchmarks for evaluating navigation performance. This section describes our approach to performance evaluation in real-world built environments. Human participants, with the symbolic navigation abilities that motivated this research, were used as a performance baseline. The symbolic navigation task used for evaluation is a common part of the human navigation experience: finding an animal at the zoo. The research was approved by the QUT Human Research Ethics Committee (approval number 1800000392). \begin{figure}[p] \centering \input{./figs/experiment_maps.tex} \caption{Spatial and hierarchical maps of the fictional zoo used for both the human and robot studies.} \label{fig:experiment_maps} \end{figure} \begin{figure}[t] \centering \input{./figs/experiment_app.tex} \caption{Screenshots from the mobile application human participants used to detect AprilTags. Each detection is highlighted, and the decoded symbolic spatial information displayed.} \vspace{\shrinkfactor} \label{fig:experiment_app} \end{figure} We used a fictional zoo environment for the experiments, with animal enclosures grouped into five themed areas branching off the ``Zoo Foyer'' as shown in Fig. \ref{subfig:experiment_map}. The zoo environment encompassed the entire floor of a university campus building. To level the playing field for humans and robots, all navigation cues (place names and direction boards) were encoded in AprilTags \cite{wang2016apriltag} which were physically placed in the environment.
AprilTags employed a combination of text and arrows to emulate labels, natural language descriptions, directional signs, and signboards (examples can be seen in Fig. \ref{fig:experiment_app}). Symbolic spatial information was encoded in the AprilTags through a single static mapping for all trials. Places in the environment and navigation cues purposely bore no visual resemblance to what they represented (animals and AprilTags respectively). This removed insights like ``this looks like an aviary'', ``that looks like a Giraffe over in the far corner'', using contextual knowledge to ignore irrelevant environment text, and long-distance cue recognition---all of which are outside the scope of this research. Each symbolic navigation task started outside the zoo, near the ``Exit'', and was deemed complete upon observation of the symbolic goal's label. The experiment consisted of 50 trials with 25 human participants completing a single navigation task each, and the robot completing 25 tasks, starting each trial with no prior navigation knowledge. Human participants were aged between 18 and 59, with university education either completed or in progress, and had never previously visited the experiment environment. Trials were split into five unique navigation goals (``Lion'', ``Kingfisher'', ``Polar Bear'', ``Anaconda'', and ``Toilets'') with attempts from five human participants and five from the robot for each goal. Participants were also provided with a graph of the zoo's spatial hierarchy (shown in Fig. \ref{subfig:experiment_graph}). For humans this was a printed sheet, whereas the robot used the graph to create relational clauses that were preloaded into the abstract map. The spatial hierarchy graph normalised contextual knowledge between participants, and removed discrepancies in contextual interpretations like whether the ``Cockatoo'' would be in the ``Bird Aviary'' or ``Outback Adventure''.
Tools were given to both human and robot participants to read the information encoded in AprilTags. A purpose-built mobile phone application was provided to human participants as shown in Fig. \ref{fig:experiment_app}, and the robot employed a detector monitoring images from a panoramic camera. Distance travelled was the performance measure used for both robot and human trials. The path travelled was recorded manually on a map in human trials, and taken directly from the robot's raw odometry data in robot trials. A fair basis for comparison was established by retracing the paths recorded for human trials with the same robot used in the robot trials. Audio was recorded during each human trial and in a brief post-interview where discussions were guided through three topics: describing navigation experiences, exploring what guided navigation (and if cues besides AprilTags played a role), and comparing AprilTag cues to the human navigation cues typically found in built environments. We designed the experiment to maximise the validity of the comparison between robot and human performance. Additional experimental controls included ensuring the robot and human were given the same contextual knowledge via the graph in Fig. \ref{subfig:experiment_graph}, limiting the AprilTag detection range in the mobile phone application to match the robot's \SI{4}{\metre} detection range, and requiring human participants to have never previously visited the experimental environment.
The parameters are listed below, with notes about their selected values: \begin{itemize} \item Each preposition $s$ is hand-mapped to a set of springs with minimal value tuning of parameters required (only four $\theta_n \in \{\pm\pi,\pm\pi/2\}$ and two $r_n \in \{1,0.5\}$ values were used across all preposition interpretations). \item The system used five different stiffness values, $K \in \{2.5,1,0.5,0.1,0.01\}$ ($2.5$ was reserved for observation springs attached to fixed point-masses, with the remaining values used in preposition-to-spring conversion). \item All point-masses had a mass of \SI{1}{\kilogram}. \item A viscous friction coefficient of $0.1$ was used, with higher values introducing unnecessary overshoot in spring motion and increasing settling time. \item $0.01$ was the expansion coefficient used to scale the proportional relationship between distance from centre of explored mass and force in $\lambda(\mathbf{x},C(t),\Theta)$ (larger values caused the spatial model to stretch). \item $L_a$ and $L_v$ were both set to $0.1$. Increasing the values caused the imagination phase to finish before point-masses had finished moving, whereas low values resulted in delayed detection of motion completion. \item The zoo hierarchy had three levels, with rooms, themed areas, and zoos corresponding to levels 1-3 respectively. Scaling factors were given the following default starting values: $\alpha^1_1 = \SI{4}{\metre}$, $\alpha^1_2 = \SI{5}{\metre}$, $\alpha^1_3 = \SI{20}{\metre}$, $\alpha^2_2 = \SI{15}{\metre}$, $\alpha^2_3 = \SI{15}{\metre}$, and $\alpha^3_3 = \SI{50}{\metre}$. \item A $25\%$ exploration step was used to rapidly expand imagined location estimates when goals were not found. \end{itemize} \subsection{Robot Configuration} An Adept GuiaBot mobile base was used in the robot experiments, with panoramic images from a 360\textdegree{} Occam camera scanned for AprilTags.
The standard ROS navigation stack provided the SLAM and spatial navigation components from Figure {\ref{fig:system_outline}}, with the robot controlled by pose goals produced from the abstract map.
\section{Introduction} Devices utilising thermal atomic vapour cells are of increasing interest since they offer high precision with a compact and relatively simple apparatus. Examples of atomic vapour cell devices include magnetometers~\cite{Kominis2003,Budker2007}, gyroscopes~\cite{Lam1983,Kornack2005}, clocks~\cite{Knappe2004a,Camparo2007}, electric field sensors~\cite{Mohapatra2008}, microwave detectors~\cite{Sedlacek2012,Sedlacek2013} and cameras~\cite{Bohi2012,Horsley2013a,Fan2014a}, quantum memories~\cite{Julsgaard2004,Lvovsky2009,Sprague2014}, optical isolators~\cite{Weller2012d}, laser frequency references~\cite{Affolderbach2005} and narrowband optical notch~\cite{Miles2001,Uhland2015} and bandpass filters~\cite{Ohman1956,Beckers1970}. Making these devices more compact, power efficient and lighter is currently a burgeoning area of research~\cite{Mescher2005,Ompact2008,Mhaskar2012}, since it allows them to become practical consumer products. Particularly for devices that require an applied magnetic field, compact vapour cells~\cite{Sarkisyan2001,Liew2004,Knappe2005,Su2009,Baluktsian2010,Tsujimoto2013,Straessle2014} offer the additional advantage that small permanent magnets can be used to create a uniform magnetic field across the vapour cell~\cite{Weller2012c}, while consuming no power. However, when confining the atomic vapour in small geometries, additional effects may need to be taken into account. For example, atom-surface interactions become important for atoms in hollow-core fibres~\cite{Epple2014} or nano-metric thin cells~\cite{Whittaker2014}. Also, cells with a shorter path length require the medium to be heated more to increase the atomic number density. Not only will this increased heating cause more Doppler broadening but the increased number density will mean that self-broadening~\cite{Lewis1980,Weller2011} must be taken into account. 
In this article we investigate the effects of these homogeneous and inhomogeneous broadening mechanisms on the performance of Faraday filters. Faraday filters were proposed in 1956 by \"{O}hman~\cite{Ohman1956} for astrophysical observations. They were later applied to solar observations~\cite{Agnelli1975,Cacciani1978} and used to frequency stabilize dye lasers~\cite{Sorokin1969,Yabuzaki1977,Endo1978}. In the early 1990s the subject of Faraday filters was revived~\cite{Dick1991,Menders1991}. Such filters have received increasing attention ever since, owing to their high performance in many applications. Faraday filters now find use in remote temperature sensing~\cite{Popescu2004}, atmospheric lidar~\cite{Chen1996,Fricke-Begemann2002,Huang2009,Harrell2010}, diode laser frequency stabilisation~\cite{Wanninger1992,Choi1993,Miao2011}, Doppler velocimetry~\cite{Cacciani1978,Bloom1991,Bloom1993}, communications~\cite{Junxiong1995} and quantum key distribution~\cite{Shan2006} in free space, optical limitation~\cite{Frey2000}, filtering Raman light~\cite{Abel2009}, and quantum optics experiments~\cite{Siyushev2014,Zielinska2014a}. The Faraday-filter spectrum is sensitive to many experimental parameters and so a theoretical model is useful for designing filters. However, there are only a few articles describing computer optimization~\cite{Kiefer2014,Zentile}. In this article we use computer optimization to find the best working conditions for compact Faraday filters. We find homogeneous broadening is particularly important for Faraday filters in `wing' operation~\cite{Zielinska2012,Zentile} and less so for `line-centre' operation~\cite{Chen1993,Kiefer2014}. The homogeneous broadening mechanism of self-broadening is particularly important to include since it is unavoidable at high density. 
Previous theoretical treatments of Faraday filters~\cite{Yin1991,Harrell2009,Zielinska2012} have not included the effect of self-broadening; we find that self-broadening is important for short cell lengths and must be included in the model in order to find the best working parameters. The structure of the rest of the article is as follows: In section~\ref{sec:Theory} we introduce the typical experimental arrangement for Faraday filters and qualitatively explain how they work. In section~\ref{sec:Opt} we explain the computer optimization technique used to find the best working parameters and show the importance of self-broadening for shorter cells. Section~\ref{sec:Exp} describes an experiment performed to compare with the theoretical optimizations. The results show that buffer gas broadening and isotopic purity strongly affect the filter spectrum. Finally we draw our conclusions in section~\ref{sec:Conc}. \section{Theory and Background}\label{sec:Theory} An atomic Faraday filter is formed by surrounding an atomic vapour cell with crossed polarizers (see figure~\ref{fig:setup}). When an axial magnetic field ($B$) is applied across the cell, the medium becomes circularly birefringent causing the plane of polarization to rotate as light traverses the cell (the Faraday effect~\cite{Budker2002}), which leads to some transmission through the second polarizer. For a dilute atomic medium the effect is negligibly small except near resonances, and since atomic resonances are extremely narrow, this results in a narrowband filter. If the signal being detected is unpolarized then half of the light will not pass through the first polarizer. This limits the filter transmission to 50\%. However, using a polarizing beam splitter allows one to arrange two Faraday filters to allow each polarization component through with little loss~\cite{Fricke-Begemann2002}. \begin{figure}[t] \includegraphics[width=\columnwidth]{figure1.eps} \caption{Illustration of the experimental arrangement.
A micro-fabricated $1\times1\times1\,$mm$^3$ $^{87}$Rb vapour cell is placed between two axially magnetized ring magnets. This arrangement is then placed between two crossed polarizers, forming the filter. The filter is tested by passing a laser beam through and onto a photodiode. The filter transmission is defined as the intensity of light transmitted through the second polarizer ($I_x$) divided by the initial intensity before the cell ($I_0$). Light out of the passband frequency is either scattered in the cell or rejected at the second polarizer ($I_y$).} \label{fig:setup} \end{figure} In a similar way, if the magnetic field is perpendicular to the light propagation direction, one can also make a `Voigt filter'~\cite{Menders1992} which exploits the Voigt effect~\cite{Franke-Arnold2001}. However, in this paper we will only consider Faraday filters. We have chosen to consider the D$_2$ ($\mathrm{n}^2\mathrm{S}_{1/2}\rightarrow \mathrm{n}^2\mathrm{P}_{3/2}$) lines of potassium and rubidium where $\mathrm{n}=$ 4 or 5 respectively. For a given cell length the parameters that affect the Faraday filter transmission spectra are the applied field ($B$) and cell temperature ($T$). The effect of $T$ is predominantly to change the atomic number density~\cite{Alcock1984} and, to a lesser extent, the Doppler width, while $B$ causes the circular birefringence and dichroism. In general the filter spectrum is a complicated function of these two parameters, due to the large number of non-degenerate Zeeman-shifted transitions, each with a different transition strength and with partially overlapping lineshape profiles. However, it is possible to accurately compute the filter profile with a computer program~\cite{Zentile2014a,Zielinska2012,Kiefer2014}. We use the ElecSus program to calculate the filter spectrum. The full description of how the program works can be found in ref.~\cite{Zentile2014a}; here we summarize the key points.
An atomic Hamiltonian is built up from contributions from hyperfine and magnetic interactions. The eigenvalues allow the transition frequencies to be calculated while the eigenstates can be used to calculate their strengths. The electric susceptibility is then calculated by adding the appropriate (complex) line-shape at each transition frequency, scaled by its strength. The imaginary part of these line-shapes has a Voigt profile~\cite{Corney1977}, which is a convolution between inhomogeneous broadening (Gaussian profile from Doppler broadening) and homogeneous broadening (Lorentzian profile). Typically, the full-width half maximum of the Lorentzian has contributions from natural broadening ($\Gamma_0$), self-broadening ($\Gamma_\mathrm{self}$) and buffer-gas broadening ($\Gamma_\mathrm{buf}$). The real part of the electric susceptibility can be used to calculate dispersion, whilst the imaginary part can be used to calculate extinction~\cite{Jackson1999}. This allows the calculation of a variety of experimental spectra, of which the Faraday filter spectrum is one. The result is given as a function of global detuning, $\Delta$, which is defined as $\Delta \equiv \omega - \omega_0$, where $\omega$ is the angular frequency of the laser light and $\omega_0$ is the global line-centre angular frequency. \section{Optimization}\label{sec:Opt} \subsection{The simple approach}\label{sec:Simple} The optical signal in a vapour cell device comes from the interaction of the light with all the atoms in the beam path. This means that for compact vapour cells with shorter path lengths, the atomic number density must increase to compensate for the loss of signal. For example the Faraday filter spectrum can be thought of as some function of the product $\sigma\mathcal{N}L$, where $\mathcal{N}$ is the number density, $L$ is the length of the medium and $\sigma$ is the microscopic atomic cross-section (describing the effect of extinction and dispersion due to a single atom).
Assuming $\sigma$ remains constant, we can achieve the same filter when reducing $L$ by increasing $\mathcal{N}$ by the same factor. Therefore, once good parameters of $B$ and $T$ are found for a particular cell length, we can find the new appropriate parameters by changing the temperature such that $\mathcal{N}L$ remains constant. However, this argument will break down at some point since $\sigma$ is not generally constant. By increasing the cell temperature we also change the amount of Doppler broadening. Also, at high densities, interactions between atoms cause self-broadening, which can be modelled as $\Gamma_\mathrm{self}=\beta\mathcal{N}$, where $\beta$ is the self-broadening parameter~\cite{Weller2011}. Both the Doppler and self-broadening will affect $\sigma$. To find where these effects become important we compare this simple scaling with a computer optimization technique, which can find the best parameters at each cell length. \subsection{Computerized optimization procedure}\label{sec:CompOpt} Efficiently finding the optimal experimental conditions for a Faraday filter requires three tools. First, a computer program is needed which can calculate the spectrum with the experimental conditions as parameters. Secondly, a definition of a figure of merit (or conversely a `cost function'~\cite{Russel2003}) is needed to numerically quantify which filter spectra are more desirable. Finally, this figure of merit is then maximised (or the cost function is minimised) by varying the parameters according to some algorithm. We used a global minimization technique~\cite{Hughes2010} which includes the random-restart hill climbing meta-algorithm~\cite{Russel2003} in conjunction with the downhill simplex method~\cite{Nelder1965} to find the values of $B$ and $T$ which maximized our figures of merit. This routine was used in conjunction with the ElecSus program~\cite{Zentile2014a} which calculated the filter spectra.
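The structure of such an optimization routine can be sketched as follows. This is a toy illustration: a quadratic cost stands in for the ElecSus filter calculation (its minimum is placed at the optimum later quoted for the 100 mm $^{87}$Rb cell), and a greedy hill climb stands in for the downhill simplex method, so only the random-restart structure is kept faithful:

```python
import random

def toy_cost(B, T):
    """Toy stand-in for the negated figure of merit, minimised at
    B = 67.3 G, T = 60.9 C (illustrative placement only)."""
    return (B - 67.3) ** 2 + (T - 60.9) ** 2

def hill_climb(cost, x0, step=1.0, tol=1e-3):
    """Greedy local descent (standing in for the downhill simplex method)."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in (0, 1):                 # vary B, then T
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if cost(*trial) < cost(*x):
                    x = trial
                    improved = True
        if not improved:
            step /= 2                    # refine once locally optimal
    return x

def random_restart(cost, n_starts=10, seed=0):
    """Random-restart meta-algorithm: keep the best local optimum found."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        x0 = (rng.uniform(0, 1000), rng.uniform(20, 200))  # B in G, T in C
        x = hill_climb(cost, x0)
        if best is None or cost(*x) < cost(*best):
            best = x
    return best

B_opt, T_opt = random_restart(toy_cost)
```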
ElecSus was used because it includes the effect of self-broadening, which is essential for this study, and also because it evaluates the filter spectrum quickly ($<1\,$s) which makes this kind of optimization practical, since the filter spectra need to be evaluated a few thousand times. \subsection{Figure-of-merit choices}\label{sec:FOMs} The signal-to-noise ratio of a narrowband signal in broadband noise is greatly improved by using a bandpass filter. For the case of white noise, the noise power is directly proportional to the bandwidth of a top-hat filter. For a more general filter profile, the equivalent-noise bandwidth (ENBW) is a quantity which is inversely proportional to the signal to noise ratio, and is defined as \begin{equation} \mathrm{ENBW}=\frac{\int^\infty_0 I_x(\nu)\mathrm{d}\nu}{I_x(\nu_s)}, \label{eq:ENBW} \end{equation} where $I_x$ is the light intensity after the filter, $\nu$ is the optical frequency and $\nu_s$ is the signal frequency. If there is freedom in the exact position of the signal frequency we can set it to the frequency which gives the maximum transmission ($I_x(\nu_s)=I_\mathrm{max}$). \begin{figure} \includegraphics[width=\columnwidth]{figure2.eps} \caption{The figures of merit of filter spectra found by optimization or extrapolation. The hollow (olive) circles show the figure of merit found by taking the optimal magnetic field and temperature of the 100 mm length cell and changing the temperature such that $\mathcal{N}L=\mathrm{const}$. The solid (purple) dots show the figure of merit maximized by changing the magnetic field and temperature for each cell length. The main panel shows the results of a wing-type filter using an isotopically pure $^{87}$Rb vapour, the inset shows a line-centre filter with a potassium vapour at natural abundance. 
Both are modelled for the D$_2$ line of the respective element.} \label{fig:FomOpt} \end{figure} Although minimising the ENBW is desirable, this usually comes with a reduction in transmission~\cite{Kiefer2014}. Using the following figure of merit, \begin{equation} \mathrm{FOM} = \left.\frac{I_\mathrm{max}^2}{\int^\infty_0 I_x(\nu)\mathrm{d}\nu} = \frac{I_\mathrm{max}}{\mathrm{ENBW}}\right\rvert_{I_x(\nu_s)=I_\mathrm{max}}, \label{eq:FOM1} \end{equation} we can maintain a reasonably large transmission~\cite{Kiefer2014}, while minimizing the ENBW. When optimising using this figure of merit we often find a wing-type filter spectrum~\cite{Zentile}. In order to compare with line-centre filters we also use the following figure of merit, \begin{equation} \mathrm{FOM^\prime} = \left.\frac{I_x^2(\nu_s)}{\int^\infty_0 I_x(\nu)\mathrm{d}\nu}\right\rvert_{\nu_s=\omega_0/2\pi}, \label{eq:FOM2} \end{equation} where we set $\nu_s$ to be the line-centre frequency. To calculate these figure-of-merit values we simulate filter spectra with a range of 60 GHz around the atomic weighted line-centre with a 10 MHz grid spacing. The integration is performed by a simple rectangle method. The limit on the accuracy of the calculated figure-of-merit values comes from the grid spacing; a finer grid spacing of 1 MHz only improves the accuracy by 0.2\% at best. \subsection{Results for wing and line-centre filters} \begin{figure}% \includegraphics[width=\columnwidth]{figure3.eps} \caption{Atomic number density after computer optimisation $(\mathcal{N}_\mathrm{opt})$ multiplied by cell length $(L)$, as a function of $L$. The optimisation involves changing the magnetic field and temperature of the cell in order to maximise the figure of merit at each cell length. The dark grey (purple) dots show the results when self-broadening is included in the model for the filter spectrum, while the light gray (blue) circles show the result without self-broadening.
The main panel shows results for an isotopically pure $^{87}$Rb vapour while the inset gives the results for potassium at natural abundance.} \label{fig:DopBufComp} \end{figure} The figure of merit of equation~\eqref{eq:FOM1} was maximized for a simulated isotopically pure $^{87}$Rb vapour with $L=100\,$mm; the optimal values of $B$ and $T$ were found to be $67.3\,$G and $60.9\,^\circ$C respectively. We then used the simple approach (section~\ref{sec:Simple}) to find the new values of the vapour cell temperature for a range of shorter cell lengths, and then evaluated the figure-of-merit values. In addition, the figure-of-merit values were re-optimized (section~\ref{sec:CompOpt}) for each cell length to see if further improvement could be found. Figure~\ref{fig:FomOpt} shows the comparison of the two methods. We can see that the figure of merit changes with cell length, as is expected, since line broadening means that the filter spectra cannot be made identical for different cell lengths. We can also see that moving to shorter cells has a deleterious effect, which can be somewhat mitigated by re-optimization at each cell length. \begin{figure} \includegraphics[width=\linewidth]{figure4.eps} \caption{Filter transmission ($I_x/I_0$, solid black curve) and cell transmission ($(I_x+I_y)/I_0$, dashed blue curve) as a function of linear detuning $(\Delta/2\pi)$, zoomed around the region of peak transmission. The left panel models a $^{87}$Rb vapour on the D$_2$ line, while the right panel models the K D$_2$ line; both cells are of length 1 mm. The cell parameters were set to $B=85.8\,$G and $T=127.8\,^\circ$C ($\mathcal{N}=3.2\times 10^{13}\,\mathrm{cm}^{-3}$) for $^{87}$Rb, and $B=864\,$G and $T=136.1\,^\circ$C ($\mathcal{N}=6.0\times 10^{12}\,\mathrm{cm}^{-3}$) for K. The uppermost lines were calculated with a Lorentzian width given by natural broadening only ($\sim 6\,$MHz) while the middle and lower lines have a further 50 and 100 MHz of Lorentzian width.
The global line-centres occur at 384.23042812~THz~\cite{Barwood1991,Ye1996} for the Rb D$_2$ line and 391.01617854~THz~\cite{Falke2006} for the K D$_2$ line.} \label{fig:TransNfilter} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{figure5.eps} \caption{Computer-optimized Faraday filter spectra as a function of linear detuning. The optimal parameters were found to be $B=67.3\,$G and $T=60.9\,^\circ$C for the 100 mm long $^{87}$Rb vapour, $B=85.8\,$G and $T=127.8\,^\circ$C for the 1 mm long $^{87}$Rb vapour, $B=801\,$G and $T=70.2\,^\circ$C for the 100 mm long K vapour, and $B=864\,$G and $T=136.1\,^\circ$C for the 1 mm long K vapour. The ENBW is 2.0 and 2.2 GHz for the $^{87}$Rb vapour at 100 and 1 mm length respectively, whereas for the K vapour the ENBW is 2.4 and 2.6 GHz at 100 and 1 mm length respectively.} \label{fig:WingVCentre} \end{figure} The inset of figure~\ref{fig:FomOpt} shows the result of a similar analysis for a potassium vapour at natural abundance~\cite{Rosman1998}, this time using the figure of merit of equation~\eqref{eq:FOM2} to produce a line-centre profile filter. The main difference in the results is that the figure of merit is less affected by decreasing cell length than that of the wing-type filter. The reason for the difference between wing-type and line-centre filters can be elucidated by plotting the $\mathcal{N}L$ product as a function of $L$ after computerized optimization at each cell length, as shown in figure~\ref{fig:DopBufComp}. By repeating the optimization with the effect of self-broadening `turned off', we can see that the $^{87}$Rb wing-type filter is affected far more by self-broadening than the K line-centre filter. One can understand this difference in the behaviour of the two types of filters by inspection of the spectra (see figure~\ref{fig:TransNfilter}). Increases in Lorentzian broadening cause a decrease in transmission through the vapour cell at the filter frequency.
This happens far more for wing-type than for line-centre filters. \begin{figure*} \includegraphics[width=0.8\linewidth]{figure6.eps} \caption{Experimental and theoretical Faraday-filter spectra on the rubidium D$_2$ line as a function of linear detuning ($\Delta/2\pi$) from the weighted line-centre (384.23042812~THz~\cite{Barwood1991,Ye1996}). A 1 mm length vapour cell was used with an isotopic ratio of 99\% $^{87}$Rb to 1\% $^{85}$Rb. The solid black line in panel (a) shows the experimental filter spectrum and the dashed (red) line shows the fit to theory that includes the natural, self, and buffer-gas-induced ($\Gamma_\mathrm{buf}$) Lorentzian broadening effects. Below panel (a) the residuals, $R$ (the difference between experiment and theory), are plotted. There is an RMS deviation between experiment and theory of 0.6\%. The inset of panel (a) shows the effect of $\Gamma_\mathrm{buf}$ on transmission (solid purple line) and ENBW (dashed blue line) of theoretical filter spectra. The vertical dashed line marks the amount of buffer gas broadening seen in the experiment. Panel (b) shows a zoomed-in region around the peak at 3.1 GHz, including theoretical curves with natural homogeneous broadening only (dashed blue) and with natural and self-broadening (solid blue).} \label{fig:RbBuffer} \end{figure*} The transmission on the wing of an absorption resonance is sensitive to Lorentzian broadening because the Gaussian contribution to the lineshape decreases much faster with detuning from resonance than the Lorentzian contribution~\cite{Siddons2009}. A transition feature with a higher optical depth will show this effect more strongly. This is one of the differences between wing-type and line-centre filters. Wing-type filters rely on the sharp decrease in transmission caused by the atomic resonances to create narrow filter transparencies.
This means that the circular dichroism cannot be too large, since both polarizations need to be scattered in the cell to sharply reduce the filter transmission to zero. However, a small amount of dichroism implies a small relative birefringence, so a high number density is required to create the large absolute birefringence necessary for the rotation of $\pi/2$. Conversely, the line-centre filter works by having a large circular dichroism, such that the transitions which absorb each polarization of light are almost completely separated. We can see this in figure~\ref{fig:TransNfilter}, where the vapour is optically thick for just one circular polarization on either side of the transparency (causing $\approx50\%$ transmission of linearly polarized light through the cell and $\approx25\%$ transmission through the filter). This large dichroism comes with a large relative birefringence, meaning that the number density can be lower for a line-centre filter. Line broadening clearly has a deleterious effect; however, good filter spectra for shorter vapour cells can be found so long as we change both $B$ and $T$ to re-optimize the filter. This is shown in figure~\ref{fig:WingVCentre}, where it is evident that the optimal filters achieved for a 1 mm cell length closely match those at 100 mm length. \section{Experiment}\label{sec:Exp} To compare theory with experiment for a compact cell, we used a micro-fabricated $1\times1\times1\,$mm$^3$ isotopically enriched $^{87}$Rb cell~\cite{Knappe2005}. The isotopic abundance of $^{85}$Rb was found by transmission spectroscopy to be $(1.00\pm0.02)\%$, in a similar way to that shown in ref.~\cite{Weller2012c}. This isotopic impurity affects the filter spectra; therefore the filter parameters were optimized taking this into account. We found the optimal parameters to be $B=72.0\,$G and $T=137.5\,^\circ$C, which gave a transmission peak at a detuning of 3.1 GHz.
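As a concrete illustration of the figure-of-merit evaluation described in section~\ref{sec:FOMs}, the following sketch computes the ENBW of equation~\eqref{eq:ENBW} and the FOM of equation~\eqref{eq:FOM1} by rectangle-rule integration on a 10 MHz grid over a 60 GHz window. The Lorentzian toy transparency is purely illustrative and stands in for a filter spectrum calculated with ElecSus.

```python
import numpy as np

# Detuning grid: 60 GHz window with 10 MHz spacing, matching the grid
# used for the figure-of-merit evaluation in the text.
nu = np.arange(-30e9, 30e9, 10e6)  # Hz

def enbw(I_x):
    """Equivalent-noise bandwidth, with nu_s taken at the transmission
    maximum; the rectangle-rule sum over the simulated window
    approximates the integral over all optical frequencies."""
    d_nu = nu[1] - nu[0]
    return np.sum(I_x) * d_nu / I_x.max()

def fom(I_x):
    """Figure of merit: peak transmission divided by the ENBW."""
    return I_x.max() / enbw(I_x)

# Toy filter profile: a Lorentzian transparency with 0.5 GHz half-width
# and 70% peak transmission (illustrative only).
gamma = 0.5e9  # HWHM in Hz
I_toy = 0.7 * gamma**2 / (nu**2 + gamma**2)

print(f"ENBW = {enbw(I_toy)/1e9:.2f} GHz, FOM = {fom(I_toy)*1e9:.2f} GHz^-1")
```

For a top-hat profile the ENBW reduces to the filter width, consistent with the white-noise argument given above.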
The experimental Faraday filter arrangement is illustrated in figure~\ref{fig:setup}. The cell was placed in an oven to heat it to near the optimal temperature, while the applied axial magnetic field was produced using a pair of permanent ring magnets. The field inhomogeneity across the cell was less than 1\%. Two crossed Glan-Taylor polarizers were placed around the cell to form the filter. A weak-probe~\cite{Smith2004,Sherlock2009} beam from an external-cavity diode laser was focussed using a lens (not shown in figure~\ref{fig:setup}) with a 30 cm focal length, and was sent through the filter such that the focus was approximately at the location of the cell. After the filter, the beam was focussed using a 5 cm focal length lens onto an amplified photodiode. The laser frequency was scanned across the Rb D$_2$ transition, and was calibrated using the technique described in ref.~\cite{Siddons2008}. Panel (a) of figure~\ref{fig:RbBuffer} shows the experimental filter spectrum plotted with a fit to theory using ElecSus~\cite{Zentile2014a}. The fit parameters were found to be $B=73\,$G and $T=138.5\,^\circ$C. The first thing to note is that, due to the 1\% $^{85}$Rb impurity, the peak transmission occurs at $\Delta/2\pi=3.1\,$GHz rather than near $-1.3\,$GHz, as it would if the cell were isotopically pure (see figure~\ref{fig:WingVCentre}). Also, a further 42 MHz of Lorentzian broadening was added in addition to $\Gamma_0$ and $\Gamma_\mathrm{self}$, due to the presence of a small quantity of background buffer gas in the vapour cell. This value was previously measured by transmission spectroscopy to be $\Gamma_\mathrm{buf}/2\pi=(42\pm1)\,$MHz. Panel (b) of figure~\ref{fig:RbBuffer} shows the filter spectrum zoomed into the main peak. In addition to the experimental data and the fit to theory, we plot the filter spectrum for the optimization that did not include the buffer-gas broadening. We can see that the additional broadening drastically affects the filter transmission.
Similarly, by removing the effect of self-broadening from the theory we again see a larger transmission. Table~\ref{tab:BroadeningComps} quantitatively compares the transmission, ENBW and FOM values for the curves shown in figure~\ref{fig:RbBuffer}. The inset of panel (a) shows the filter transmission at a detuning of 3.1 GHz and the ENBW as a function of $\Gamma_\mathrm{buf}$. The transmission decreases while the ENBW increases, showing that the performance (as measured by the ratio of transmission to ENBW) of this kind of Faraday filter deteriorates quickly with increasing buffer gas pressures. \begin{table} \caption{Maximum transmission ($T_\mathrm{max}$), equivalent-noise bandwidth (ENBW) and their ratio (FOM) for a 1 mm long isotopically enriched vapour cell. The magnetic field and temperature were 73 G and 138.5$^\circ$C respectively. The first row represents the fit to the experiment shown in figure~\ref{fig:RbBuffer}, while subsequent rows give the values after certain physical effects were removed (theoretically).} \begin{center} \begin{tabular}{cccc} \hline Spectrum & $T_\mathrm{max}$ & ENBW (GHz) & FOM (GHz$^{-1}$) \\ \hline Fit to Experiment \rule{0pt}{3.0ex} & 0.55 & 3.0 & 0.18 \\ No buffer gas \rule{0pt}{3.5ex} & 0.77 & 2.6 & 0.29 \\ \begin{tabular}{c} No self-broadening \rule{0pt}{3.5ex}\\ or buffer gas \end{tabular}& 0.83 & 2.6 & 0.31\\ \hline \end{tabular} \end{center} \label{tab:BroadeningComps} \end{table} The amount of broadening due to buffer gas pressure that we observe typically corresponds to approximately 1--2 Torr of buffer gas~\cite{Rotondaro1997,Zameroski2011}. The fact that this small pressure affects the filter spectra by a large amount shows that wing-type Faraday filter spectra are very sensitive to buffer gas pressure. It has previously been shown that non-linear Faraday rotation can be a sensitive probe of buffer gas pressure~\cite{Novikova2002}, being non-invasive and using a simple apparatus.
Our results show that it may be possible to use the linear Faraday effect instead, for which it is easier to model the effect of buffer-gas pressure. However, it is not yet clear if this is more sensitive than using transmission spectroscopy~\cite{Wells2014}. \section{Conclusions}\label{sec:Conc} We have described an efficient computerized method to optimize the cell magnetic field and temperature for short-cell-length Faraday filters. From theoretical spectra we see that wing-type filters in particular are deleteriously affected by homogeneous broadening, while line-centre filters are less affected. We performed an experiment to realize a wing-type filter using a micro-fabricated 1 mm length $^{87}$Rb vapour cell, and found excellent agreement with theory. While buffer gases can enhance some signals in vapour cells~\cite{Brandt1997}, they should be kept to a minimum in order to achieve the narrowest Faraday filters with the highest transmission. \begin{acknowledgments} We thank W. J. Hamlyn for his contribution to the experiment. We are grateful to S. Knappe for providing the vapour cell used in the experiment. We acknowledge financial support from EPSRC (grant EP/L023024/1) and Durham University. RSM was funded by a BP Summer Research Internship. The data presented in this paper are available from \url{http://dx.doi.org/10.15128/kk91fk598}. \end{acknowledgments}
\section*{Background \& Summary} Proteins are the machinery of life. We here present a first-principles study of the conformational preferences of their basic building blocks -- specifically, as summarized in Figure~\ref{fig:AA_scheme}: 20 proteinogenic amino acids and dipeptides, with different possible protonation states, and the conformational space changes resulting from attaching six divalent cations, i.e., Ca$^{2+}$, Ba$^{2+}$, Sr$^{2+}$, Cd$^{2+}$, Pb$^{2+}$, and Hg$^{2+}$. In past studies, a wide range of different approximate electronic structure methods has been applied to some of these proteinogenic amino acids -- see, for example, references \cite{jcc30_2105, mp106_2289, pbmb71_243, msr31_391, psfb48_107,jcc29_407,jms346_141,jacs114_9568,pccp13_18561,pccp11_3921,jpca102_5111,jcc28_1817,jmst671_77,jmst332_251,jctc6_3066,jacs119_5908,jpca115_9658,jcp127_154314,jcp137_75102,jpca112_3319,ijms283_56,jcc18_1609,jpca114_5919,jpca116_3247,jpca115_2900,jpca114_7583,jpca109_2660,cpl453_1,ijqc112_1526,jcp118_1253,jcp122_134313,jmst953_28,jmst631_277,jmst719_153,jmst666_273,jmst540_271,sapa73_865,pccp10_1248,pccp15_6097,pccp12_4899,pccp9_4698,prl91_203003,pnas104_20183,mp107_761,jpcb116_12441,ctc976_42,jpoc24_553,jpoc20_1099,jacs115_2923,jctc9_1533,njc29_1540,baldauf2012ab,ijqc65_1033,cej9_1008,jpc100_11589,cpb20_033102,Yuan2014,Karton2014,Kesharwani2015}. These studies have deepened our understanding of the conformational basics of individual building blocks, but a systematic comparison of properties of the different building blocks is complicated when relying on data from different sources. On the one hand this is due to the molecular models that may differ in protonation states and backbone capping. On the other, the simulations can differ in several ways: \begin{itemize} \item Different sampling strategies or methods to generate conformers may have been used. Search-dependent settings, like energy cut-offs, can also have a significant impact on the results. 
\item The levels of theory that have been applied range from semi-empirical to Hartree-Fock (HF) to density-functional theory (DFT) up to coupled-cluster calculations \cite{jcc30_2105, mp106_2289, pbmb71_243, msr31_391, psfb48_107,jcc29_407,jms346_141,jacs114_9568,pccp13_18561,pccp11_3921,jpca102_5111,jcc28_1817,jmst671_77,jmst332_251,jctc6_3066,jacs119_5908,jpca115_9658,jcp127_154314,jcp137_75102,jpca112_3319,ijms283_56,jcc18_1609,jpca114_5919,jpca116_3247,jpca115_2900,jpca114_7583,jpca109_2660,cpl453_1,ijqc112_1526,jcp118_1253,jcp122_134313,jmst953_28,jmst631_277,jmst719_153,jmst666_273,jmst540_271,sapa73_865,pccp10_1248,pccp15_6097,pccp12_4899,pccp9_4698,prl91_203003,pnas104_20183,mp107_761,jpcb116_12441,ctc976_42,jpoc24_553,jpoc20_1099,jacs115_2923,jctc9_1533,njc29_1540,baldauf2012ab,ijqc65_1033,cej9_1008,jpc100_11589,cpb20_033102,Yuan2014,Karton2014,Kesharwani2015}. \item Numerical settings, e.g., basis sets, can differ substantially and might lead to different results. \end{itemize} A further point that limits a quantitative comparison is the accessibility of the data from different studies. Energies, for example, often have to be extracted from table footnotes, and the structural data is not always accessible in the Supporting Information of the respective articles; sometimes it is only available as figures in the manuscript. The data set presented here overcomes such limitations by covering a comprehensive segment of chemical space exhaustively, using a large-scale computational effort. This study treats 20 proteinogenic amino acids, their dipeptides and their interactions with the divalent cations Ca$^{2+}$, Ba$^{2+}$, Sr$^{2+}$, Cd$^{2+}$, Pb$^{2+}$, and Hg$^{2+}$ (see Figure~\ref{fig:AA_scheme} for an overview) on the same theoretical footing. The importance of peptide--cation interactions is highlighted by the fact that about 40\% of all proteins bind cations\cite{cr96_2239,cob3_378,jib102_1901}.
Ca$^{2+}$ in particular is important in a multitude of functions, ranging, for example, from blood clotting\cite{zhou2011novel} to cell signaling to bone growth\cite{ebj39_825}. Such calcium-mediated functions can be disturbed by the presence of alternative divalent heavy-metal cations like Pb$^{2+}$, Cd$^{2+}$, and Hg$^{2+}$\cite{jt132671,jib102_1901,bbrc372_341}. The conformations and total energies of each molecular system are calculated from first principles in the framework of density-functional theory (DFT) \cite{pr136_b864,pr140_a1133} using the PBE generalized-gradient exchange-correlation functional\cite{prl77_3865}. Energies are corrected for van der Waals interactions using the Tkatchenko-Scheffler formalism \cite{prl102_73005}. In this formalism, pairwise $C_6[n]/r^6$ terms are computed and summed up for all pairs of atoms. Here $r$ is the interatomic distance, a cut-off is applied at short interatomic distances, and the $C_6[n]$ coefficients are obtained from the self-consistent electron density. The combined approach is referred to as ``PBE+vdW'' throughout this work. This level of theory is robust for potential-energy surface (PES) sampling of peptide systems \cite{jpcl1_3465,prl106_118102,cej19_11224,doi:10.1021/jp3098268,doi:10.1021/jp402087e,doi:10.1021/jp412055r,C4CP05216A,C4CP05541A}. The curated data is provided as a basis for comparative studies across chemical space to reveal conformational trends and energetic preferences. It can further be used, for example, for force-field development, for theoretical studies at higher levels of theory, and as a starting point for theoretical calculations of spectra for biophysical applications. \section*{Methods } \subsection*{Molecular models} This study covers a total of 280 molecular systems (summarized in Figure~\ref{fig:AA_scheme}). This number is the product of the following chemical degrees of freedom considered in our study: \begin{description} \item[20] proteinogenic amino acids.
In the case of (de)protonatable side chains, all protomers (different protonation states) were considered as well. \item[2] different backbone types, either free termini (considered in uncharged or zwitterionic form) or capped (N-terminally acetylated or C-terminally amino-methylated). \item[7] reflecting that the respective amino acid or dipeptide was considered either in isolation or with one of six different cation additions: Ca$^{2+}$, Ba$^{2+}$, Sr$^{2+}$, Cd$^{2+}$, Pb$^{2+}$, or Hg$^{2+}$. \end{description} \subsection*{Conformational search and energy functions} For the initial scan of the PES, the empirical force field OPLS-AA \cite{jacs118_11225} was employed, followed by DFT-PBE+vdW relaxations of the energy minima identified in the force field. The identified set of structures was then subjected to a further first-principles refinement step, \textit{ab initio} replica-exchange molecular dynamics (REMD). An overview of the procedure is given in Figure~\ref{fig:workflow} and the steps are described in more detail below. Force-field based (OPLS-AA) \cite{jacs118_11225} \textbf{global conformational searches (Step~1)} were performed for all dipeptides and amino acids (i) without a coordinating cation and (ii) with Ca$^{2+}$. These searches employed a basin-hopping search strategy\cite{jpca101_5111, science285_1368} as implemented in the tool ``scan'', distributed with the \textsc{Tinker} molecular simulation package \cite{jcc87_1016,jpcb107_5933}. We here use an in-house parallelized version of the \textsc{Tinker} scan utility that was first used in reference \cite{doi:10.1021/jp3098268}. In this search strategy, input structures for relaxations are generated by projecting along normal modes starting from a local minimum. The number of search directions from a local minimum was set to 20. Conformers were accepted within a relative energy window of 100\,kcal/mol and if they differed in energy from already-found minima by at least 10$^{-4}$\,kcal/mol.
The search terminates when the relaxations of input structures do not result in new minima. After that, \textbf{PBE+vdW relaxations (Step~2)} were performed with the program FHI-aims \cite{cpc180_2175,Havu20098367,1367-2630-14-5-053020}. FHI-aims employs numeric atom-centered orbital basis sets as described in reference \cite{cpc180_2175} to discretize the Kohn-Sham orbitals. Different levels of computational defaults are available, distinguished by the choice of basis set, integration grids, and the order of the multipole expansion of the electrostatic (Hartree) potential of the electron density. For the chemical elements relevant to this work, ``light'' settings include the so-called \textit{tier1} basis sets and were used for initial relaxations. ``Tight'' settings include the larger \textit{tier2} basis sets and ensure conformational energy differences converged at the level of a few meV \cite{cpc180_2175}. Unless noted otherwise, all energies discussed here are results of PBE+vdW calculations with a \textit{tier2} basis and ``tight'' settings. Relativistic effects were taken into account by the so-called atomic zero-order regular approximation (atomic ZORA)\cite{cpl328_107,jcp109_392} as described in reference \cite{cpc180_2175}. Previous comparisons to high-level quantum chemistry benchmark calculations at the coupled-cluster level, CCSD(T), demonstrated the reliability of this approach for polyalanine systems \cite{prl106_118102,doi:10.1021/jp412055r}, alanine, phenylalanine, and glycine containing tripeptides \cite{doi:10.1021/jp412055r}, and alanine dipeptides with Li$^+$ \cite{cej19_11224}. Further benchmarks at the MP2 level of theory are reported below in the section Technical Validation. The \textbf{refinement (Step~3)} by \textit{ab initio} REMD\cite{prl57_2607,cpl314_141} is intended to alleviate the potential effects of conformational energy landscape differences between the force field and the DFT method.
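The swap move at the heart of the replica-exchange refinement can be made concrete with a short sketch of the standard Metropolis acceptance rule for exchanging configurations between two replicas. This is a generic illustration (function and variable names are ours), not the script-based REMD scheme distributed with FHI-aims.

```python
import math
import random

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def swap_accepted(E_i, T_i, E_j, T_j, rng=random.random):
    """Standard Metropolis criterion for swapping the configurations of
    two replicas with potential energies E_i, E_j (in eV) running at
    temperatures T_i, T_j (in K).

    With T_i < T_j, a swap that moves the lower-energy configuration to
    the lower temperature (delta >= 0) is always accepted; otherwise it
    is accepted with probability exp(delta)."""
    delta = (1.0 / (K_B * T_i) - 1.0 / (K_B * T_j)) * (E_i - E_j)
    return delta >= 0.0 or rng() < math.exp(delta)
```

Attempting such swaps between neighbouring temperatures at short intervals lets trajectories cross barriers that would trap a single-temperature MD run.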
In REMD, multiple molecular dynamics trajectories of the same system are independently initialized and run in a range of different temperatures. Based on a Metropolis criterion, configurations are swapped between trajectories of neighboring temperatures. Thus, the simulations can overcome barriers and provide enhanced conformational sampling in comparison to classical molecular dynamics (MD)\cite{pccp7_3910,cpl314_141}. The simulations were carried out employing a script-based REMD scheme that is provided with FHI-aims and that was first used in reference \cite{C1FD00027F}. Computations were performed at the PBE+vdW level with ``light'' computational settings. The run time for each REMD simulation was 20\,ps with an integration time step of 1\,fs. The frequent exchange attempts (every 0.04 or 0.1\,ps) ensure efficient sampling of the potential-energy surface as shown by Sindhikara \emph{et al.}\cite{jcp128_24103}. The velocity-rescaling approach by Bussi \emph{et al.}\cite{jcp126_14101} was used to sample the canonical distribution. Starting geometries for the replicas were taken from the lowest-energy conformers resulting from the PBE+vdW relaxations in Step~2. REMD parameters for the individual systems, i.e., the number of replicas, acceptance rates for exchanges between replicas, the frequency for exchange attempts, and the temperature range, are summarized in table S1 of the Supporting Material. Conformations were extracted from the REMD trajectories every 10th step, i.e., every 10\,fs of simulation time. In order to generate a set of representative conformers, these structures were clustered using a $k$-means clustering algorithm\cite{as28_100} with a cluster radius of 0.3\,\AA{} as provided by the MMSTB package\cite{jmgm22_377}. The resulting arithmetic-mean structures from each cluster were then relaxed using PBE+vdW with ``light'' computational settings.
The obtained conformers were again clustered and cluster representatives were relaxed with PBE+vdW (``tight'' computational settings) to obtain the final conformation hierarchies. The refinement step by REMD is essential, as shown in Figure~\ref{fig:HowMany}, which separately identifies the number of distinct conformers found in Step~2 and, subsequently, the number of additional conformers found in Step~3. After Step~2, a total of 17,381 stationary points was found for the amino acids and dipeptides in isolation and in complex with Ca$^{2+}$. The refinement procedure in Step~3 increases this number to a total of 21,259 structures. Initial structures for the Ba$^{2+}$, Cd$^{2+}$, Hg$^{2+}$, Pb$^{2+}$ and Sr$^{2+}$ binding amino acid and dipeptide systems were then obtained by replacing the cation in the Ca$^{2+}$-binding amino acid and dipeptide structures. These structures were subsequently relaxed with PBE+vdW employing ``tight'' computational settings and a tier-2 basis set. This procedure results in 24,633 further conformers with bound cations. Altogether, we thus provide information on 45,892 stationary points of the PBE+vdW PES for all systems studied in this work. The numbers of conformers identified in the searches are also given in Table S2 of the Supporting Material. Tables S3 and S4 provide detailed accounts of how many structures were found for which amino acid/dipeptide in isolation or with attached cations. \section*{Data Records } The curated data, consisting of the Cartesian coordinates of 45,892 stationary-point geometries of the PBE+vdW PES (the main outcome of our work) and their potential energies computed at the ``tight''/tier-2 level of accuracy in the FHI-aims code, is provided as plain text files sorted in directories (see Figure~\ref{fig:folders}). The PBE+vdW total energies are included since they are an integral part of the construction of our geometry data sets.
Importantly, the stationary-point geometries could be used as starting points to refine the total-energy accuracy by higher-level methods, e.g., those discussed in ``Technical Validation'' below. The folder structure is hierarchic and straightforward. The naming scheme is explained in the following description of the file types: \begin{description} \item[conformer.(...).xyz] coordinates in standard xyz format in \AA{}, readable by a wide range of molecule viewers, e.g., VMD, Jmol, etc. \item[conformer.(...).fhiaims] coordinate file in FHI-aims geometry input format: for each atom of the particular system, the Cartesian coordinates are given in \AA{} (\texttt{atom [x] [y] [z] [element]}). The electronic total energy (in eV) at the PBE+vdW level is given there as a comment. \item[control.in] FHI-aims input file with technical parameters for the calculations. Please note that these files also include the exact specifications of the ``tight'' numerical settings for all included elements. \item[hierarchy\_PBE+vdW\_tier-2.dat] in each final subfolder; contains three columns: number of the conformer, total energy (in eV, PBE+vdW, tier-2 basis set, ``tight'' numerical settings, computed with FHI-aims version 031011), and relative energy (in eV, relative to the respective global minimum). \end{description} The curated data is publicly available from several sources: \begin{enumerate} \item A website dedicated to this data set has been set up\footnote{http://aminoaciddb.rz-berlin.mpg.de} and allows users to browse and download the data and to visualize molecular structures online. \item From the NOMAD repository\footnote{http://nomad-repository.eu} the data is available via the DOI 10.17172/NOMAD/20150526220502\footnote{http://dx.doi.org/10.17172/NOMAD/20150526220502} [Data citation 1].
\item In addition, the data has been uploaded to DRYAD\footnote{https://datadryad.org} and has been assigned the DOI 10.5061/dryad.vd177\footnote{http://dx.doi.org/10.5061/dryad.vd177} [Data citation 2]. \end{enumerate} \section*{Technical Validation } The conformational coverage for the amino acid alanine is validated by comparing to a recent study by Maul \textit{et al.}\cite{jcc28_1817}. In that reference, 10 low-energy conformers of alanine were reported, spanning an energy range of approximately 0.26\,eV between the reported lowest and highest energy conformers. The level of theory used by Maul \textit{et al.} was DFT in the generalized gradient approximation by means of the Perdew-Wang 1991 functional\cite{Perdew91}. In our case, the force-field-based search step with subsequent PBE+vdW relaxations yields 5 conformers. The subsequent \textit{ab initio} REMD simulations increase the number of conformers to 15 within an energy range of 0.43\,eV. The respective conformational energy hierarchies after the global search and after the REMD refinement are shown in Figure~\ref{ala_example}A. The results of our search (with the refinement step) are in good agreement with the data from reference \cite{jcc28_1817}, which is also shown in Figure~\ref{ala_example}A. Structures are shown in Figure~\ref{ala_example}B. Nine of the ten conformers identified by Maul \textit{et al.} can be confirmed. The single conformer that is missing (highlighted by an X in Figure~\ref{ala_example}A) is not a stationary point of the PBE+vdW potential energy surface. Conformers 14 and 15 are classified as saddle points by analysis of the vibrational modes.
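Conformational energy hierarchies like the ones compared above are shipped in the hierarchy\_PBE+vdW\_tier-2.dat files described under Data Records, with three whitespace-separated columns per conformer: number, total energy in eV, and energy relative to the global minimum in eV. A minimal reader, assuming plain whitespace-separated text as described there, could look as follows:

```python
def read_hierarchy(path):
    """Parse a hierarchy_PBE+vdW_tier-2.dat file.

    Each data line is assumed to hold three whitespace-separated
    columns: conformer number, PBE+vdW total energy (eV), and energy
    relative to the global minimum (eV). Returns (int, float, float)
    tuples, one per conformer.
    """
    records = []
    with open(path) as handle:
        for line in handle:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip blank or malformed lines
            number, e_total, e_relative = fields[:3]
            records.append((int(number), float(e_total), float(e_relative)))
    return records
```

Sorting the returned list by the third column reproduces the energy hierarchy, with the relative energy of the global minimum equal to zero.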
In order to further quantify the reliability of the DFT-PBE+vdW level of theory for peptides, beyond earlier benchmark work\cite{prl106_118102,doi:10.1021/jp412055r,cej19_11224} and especially with divalent cations, benchmark calculations were performed at the level of M{\o}ller-Plesset second-order perturbation theory (MP2) \cite{MP2-1,MP2-2} using the electronic structure program package ORCA \cite{ORCA}. Single-point energy calculations were performed for all fixed stationary-point DFT-PBE+vdW geometries in our database for the amino acids alanine (Ala) and phenylalanine (Phe) with neutral N and C termini in isolation as well as in complex with a Ca$^{2+}$ cation. Phe was selected to represent a ``difficult'' example, i.e., the interaction of the cation with a larger aromatic side chain. The MP2 calculations did not include any frozen-core treatment (including semicore states is essential for Ca$^{2+}$) and were performed using Dunning's correlation-consistent polarized core-valence basis sets (cc-pCVnZ), with $n$=T/Q/5 denoting the triple-zeta, quadruple-zeta, and quintuple-zeta basis sets, respectively \cite{cc-pCVnZ}. The calculated SCF (Hartree-Fock) and MP2 correlation energies were then individually extrapolated to the complete basis set (CBS) limit as follows: For the SCF energies, we used the extrapolation strategy proposed by Karton and Martin \cite{SCFextrapolation}: \begin{equation} E^{n}_{SCF}=E^{CBS}_{SCF}+A e^{-\alpha\sqrt{n}}. \label{CBSSCF} \end{equation} Here $A$, $\alpha$, and the CBS-extrapolated energy $E^{CBS}_{SCF}$ are parameters determined from a least-squares fitting algorithm applied individually for each conformer. For the MP2 correlation energies, an extrapolation scheme proposed by Truhlar \cite{MP2extrapolation} was applied: \begin{equation} E^{n}_{corr}=E^{CBS}_{corr}+B n^{-\beta}.
\label{CBSMP2} \end{equation}\\ Again, $B$, $\beta$, and the CBS-extrapolated energy $E^{CBS}_{corr}$ are parameters determined from a least-squares fit as before. A detailed account of all numbers is given in the Supporting Material (Table S5). Mean absolute errors between the density-functional approximation (DFA) relative energies and the basis-set extrapolated MP2 relative energies were calculated as follows: \begin{equation} MAE = \frac{1}{N} \sum_{i=1}^{N} |\Delta E_i^{DFA} - \Delta E_i^{MP2}+c|, \label{EquMAE} \end{equation}\\ where the index $i$ runs over all $N$ conformations of a given data set and $\Delta E_i$ denotes the energy difference between conformer $i$ and the lowest-energy conformer of the set. The adjustable parameter $c$ shifts the MP2 and DFA conformational hierarchies relative to one another to obtain the lowest possible MAE, rendering the reported MAE value independent of the choice of reference structure. Figure~\ref{MP2bench}A shows the resulting mean absolute errors (MAE) and maximal errors ($\max_{i}|\Delta E_i^{DFA} - \Delta E_i^{MP2}+c|$) of different DFA calculations -- performed with the FHI-aims code -- with respect to the MP2 benchmarks obtained as described above. Within FHI-aims, the accuracy of the integration grids and of the electrostatic potential was also verified by comparing ``tight'' and ``really\_tight'' numerical settings, giving virtually identical results. The PBE+vdW level of theory shows an MAE well within \textit{chemical accuracy} of $\sim1\,\mathrm{kcal/mol}\approx43\,\mathrm{meV}$ for both structural sets of Ala and Phe; for Phe, the maximal error is $\sim2\,\mathrm{kcal/mol}$. We next applied a different long-range dispersion treatment, a recent many-body dispersion model based on interacting quantum harmonic oscillators denoted as MBD,\cite{MBD} which shows no significant improvement for the isolated amino acids. In line with Ref. 
\cite{doi:10.1021/jp412055r}, applying the more expensive PBE0 \cite{PBE0Adamo} hybrid exchange-correlation functional reduces the maximum deviation for Phe to $\sim57\,\mathrm{meV}$, i.e., 1.3\,kcal/mol. For Ala and Phe with neutral end caps in complex with a Ca$^{2+}$ cation, Figure~\ref{MP2bench}B compares the same set of DFAs to MP2 benchmark energy hierarchies. However, obtaining basis-set converged total energies of the same accuracy as for the isolated peptides by straightforward CBS extrapolation proved considerably more difficult when Ca$^{2+}$ was involved. The reason is traced to the significant and slowly converging correlation contribution of the Ca$^{2+}$ semicore electrons, which leads to large and conformation-dependent basis set superposition errors (BSSE). This problem was verified for MP2 calculations in the FHI-aims and ORCA codes, with several different basis set prescriptions \cite{Zhang2013}, and for CCSD(T) calculations. Standard DFAs, if sufficiently accurate, have a significant advantage in this respect since they are not subject to comparable numerical convergence problems. To nevertheless arrive at reliable CBS-extrapolated MP2 conformational energy differences, we subjected the SCF and correlation energies of each Ca$^{2+}$ coordinated conformation to a counterpoise correction\cite{counterpoise1,counterpoise2} to minimize the effect of BSSE on the Ca$^{2+}$ correlation energy contribution, prior to performing CBS extrapolation as described above. For the example of Ala+Ca$^{2+}$ and assuming rigid conformers, the BSSE is estimated as: \begin{align} \begin{split} E_{BSSE} =&E_{BSSE}(Ala)+E_{BSSE}(Ca^{2+})\text{ , with}\\ &E_{BSSE}(Ala) =E^{Ala+Ca^{2+}}(Ala)-E^{Ala}(Ala)\text{ , and}\\ &E_{BSSE}(Ca^{2+})=E^{Ala+Ca^{2+}}(Ca^{2+})-E^{Ca^{2+}}(Ca^{2+}) . 
\end{split} \label{BSSE} \end{align} $E^{Ala+Ca^{2+}}(Ala)$ represents the energy of Ala evaluated in the union of the basis sets on Ala and Ca$^{2+}$, $E^{Ala}(Ala)$ represents the energy of Ala evaluated in the basis set on Ala, \textit{etc.} The individual BSSE contributions are then subtracted from the SCF and correlation energies, respectively. Phe+Ca$^{2+}$ is treated equivalently. Complete numerical details are given in the Supplementary Material (Table S6). Following this procedure, the MAE and maximal error values of various DFAs compared to MP2 are well within 1\,kcal/mol for Ala+Ca$^{2+}$. The PBE+vdW MAE for Phe+Ca$^{2+}$ amounts to just above $2\,\mathrm{kcal/mol}$. Including both the many-body dispersion treatment and the hybrid PBE0 functional improves the MAE to just above $1\,\mathrm{kcal/mol}$ at the PBE0+MBD* level of theory. The maximum errors in the energy hierarchies between individual conformers are correspondingly larger. Overall, this assessment shows that our database of conformer geometries constitutes an excellent starting point for more exhaustive future benchmark work on new electronic structure methods for cation-peptide systems. For example, it would be very interesting to explore how F12 approaches, which address the correlation energy convergence problem explicitly, fare for a broad range of different Ca$^{2+}$-containing conformations of our peptides. As a final validation, we compare calculated gas-phase amino acid--Ca$^{2+}$ binding energies to the binding-energy hierarchy found experimentally in a study by Ho \emph{et al.}\cite{rcms21_1097}. We calculate the binding energy at the PES level as \begin{equation} E_{binding} = E_{amino\,\,acid} + E_{cation} - E_{complex} . 
\label{Ebind} \end{equation} Energies $E$ denote the PBE+vdW Born-Oppenheimer potential energies, including $E_{amino\,\,acid}$ of the lowest-energy conformer of the isolated amino acid and $E_{complex}$ of the same amino acid in complex with a Ca$^{2+}$ ion. Experimentally \cite{rcms21_1097}, the gas-phase Ca$^{2+}$ affinities of 18 proteinogenic amino acids were determined by fragmenting Ca$^{2+}$ complexes with a combinatoric library of tripeptides at $T\approx$330~K, recording the mass spectrometric peak intensities of different fragmentation products. Quantitative average relative binding energies of Ca$^{2+}$ to different amino acids were thus inferred and can be compared to our findings, albeit with several important experiment-theory differences: (i) entropy effects \cite{Liwo15022005,cej19_11224,doi:10.1021/jp402087e} should affect the specific complexes probed experimentally but cannot be included in the calculated numbers in exactly the same way; (ii) structural differences (e.g., protonation, dimerized amino acids) exist between the fragments recorded in experiment and the amino acids covered here; (iii) experimental Ca$^{2+}$ affinities are not given for Asp and Glu because their gas-phase acidities, needed for data conversion, are not known. Figure~\ref{be_exp} compares the experimentally and theoretically inferred Ca$^{2+}$ binding affinities qualitatively. The $x$ axis reflects the experimental binding affinity hierarchy, arranging amino acids from left to right in order of decreasing binding affinity. The $y$ axis shows calculated binding energies according to Eq.~\ref{Ebind}. Perfect correlation of the experimental and calculated hierarchies would imply a strictly monotonic decrease of calculated $E_{binding}$ values from left to right. This monotonic trend is not obeyed exactly; however, in view of the significant differences (i) and (ii) above, the qualitative agreement is quite striking. 
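As an illustrative sketch of the two-step CBS extrapolation used for the MP2 benchmarks earlier in this section, the snippet below fits the Karton--Martin and Truhlar expressions to three basis-set energies. The input energies are made-up values generated from known parameters, so the three-point fits are exact by construction:

```python
import numpy as np
from scipy.optimize import curve_fit

# Cardinal numbers n of the cc-pCVnZ basis sets (T/Q/5)
n = np.array([3.0, 4.0, 5.0])

def scf_model(n, e_cbs, a, alpha):
    # Karton-Martin form: E_SCF(n) = E_CBS + A * exp(-alpha * sqrt(n))
    return e_cbs + a * np.exp(-alpha * np.sqrt(n))

def corr_model(n, e_cbs, b, beta):
    # Truhlar form: E_corr(n) = E_CBS + B * n**(-beta)
    return e_cbs + b * n ** (-beta)

# Synthetic total energies in eV, generated from known parameters
e_scf = scf_model(n, -8811.40, 5.0, 2.0)
e_corr = corr_model(n, -29.80, 15.0, 3.0)

# Three parameters and three data points: each fit is exact
p_scf, _ = curve_fit(scf_model, n, e_scf, p0=[e_scf[-1] - 0.1, 5.0, 2.0])
p_corr, _ = curve_fit(corr_model, n, e_corr, p0=[e_corr[-1] - 0.2, 10.0, 3.0])

# CBS-limit MP2 total energy = extrapolated SCF + extrapolated correlation
e_cbs_total = p_scf[0] + p_corr[0]
print(p_scf[0], p_corr[0], e_cbs_total)
```

Because the number of fit parameters equals the number of data points, each fit reproduces its input energies exactly, as noted for Tables S5a and S5b.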
Normalized correlation coefficients between the experimental (1) and calculated (2) binding affinity data were calculated following the formula: \begin{equation} r_{12} = s_{12}/(s_1 s_2), \label{corrcoeff} \end{equation} with $s_{12}$ the covariance of the two data sets and $s_i$ the standard deviation of data set $i$=1,2. The resulting correlation coefficients, $r_{12}$=0.979 for uncapped amino acids and $r_{12}$=0.909 for dipeptides, also point to an overall remarkably good agreement. Finally, Figure~\ref{be_exp} also gives predicted $E_{binding}$ values for protonated (overall system charge +2) and deprotonated (overall system charge +1) Asp and Glu, reflecting the significant electrostatic attraction between cations and negatively charged (deprotonated) Asp and Glu side chains. The binding energy data sets are included as Supplementary Table S7. \section*{Usage Notes} The present data set contains stationary-point geometries (mainly minima, but also saddle points, since no routine normal-mode analysis was performed) on the potential energy surface of the 20 proteinogenic amino acids and dipeptides, either isolated or in complex with a divalent cation (Ca$^{2+}$, Ba$^{2+}$, Sr$^{2+}$, Cd$^{2+}$, Pb$^{2+}$, Hg$^{2+}$). Users of this dataset may find openbabel\cite{joc3_33} (www.openbabel.org) a useful tool to convert the FHI-aims and xyz files to other common file formats in chemistry. \section*{Author Contributions} MR performed the calculations to assemble all conformers. MR and CB curated the data. Validation calculations by DFAs and correlated methods other than PBE+vdW were carried out by MS. MR, CB, and VB designed the study and wrote the data descriptor. \section*{Acknowledgements} The authors are grateful to Matthias Scheffler (Fritz Haber Institute Berlin) for support of this work and stimulating discussions. 
Luca Ghiringhelli is gratefully acknowledged for his work on the script-based parallel-tempering scheme that is provided with FHI-aims and that was used in the present work. The authors thank Robert Maul and Karsten Hannewald for making available the original alanine geometries derived in their 2007 study for comparison with the present results. The authors further thank Mariana Rossi, Franziska Schubert, and Sucismita Chutia for sharing their extensive experience with all search methods employed in this work. \section*{Competing financial interests} The authors declare no competing financial interests. \clearpage \section*{Figures and Legends} \begin{figure} \includegraphics[width=1.2\textwidth]{./Figure_1_Systems.pdf} \caption{{\bf Molecular systems covered in this study.} Top left and center: Schematic depiction of the backbone conformations of uncharged, zwitterionic, and dipeptide forms of the amino acids considered in this work. Side chains are indicated by the letter \textsl{\textbf{R}}. Top right: Divalent ions considered for complexation with the 20 proteinogenic amino acids. Lower five rows: Side chains, including different protonation states where applicable, of the 20 proteinogenic amino acids considered in this work. } \label{fig:AA_scheme} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{./Figure_2_Scan_procedure.pdf} \end{center} \caption{{\bf Schematic representation of the workflow} employed to locate stationary points on the potential-energy surfaces of the respective molecular systems.} \label{fig:workflow} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.2\textwidth]{./Figure_3_How_many.pdf} \end{center} \caption{{\bf Numbers of stationary points} of the PBE+vdW potential-energy surface (PES) at the ``tight''/tier-2 level of accuracy that were found for the different \textbf{a)} uncapped amino acids or \textbf{b)} dipeptides in isolation (``bare'') or with a Ca$^{2+}$ cation. 
Blue segments of the bars and blue shaded numbers give the number of stationary points (``conformers'') located in Step 2 of the search procedure detailed in Figure~\ref{fig:workflow}. Red bar segments and red shading highlight the number of conformers that were additionally found during Step 3 of the search. The total number of conformers found for each system is the sum of the numbers found in steps two and three.} \label{fig:HowMany} \end{figure} \begin{figure} \dirtree{% .1 AA-Dataset. .2 Ala. .2 Arg. .2 ArgH. .2 Asn. .2 Asp. .2 AspH. .2 Cys. .3 uncapped. .3 dipeptide. .4 bare. .4 Ba. .4 Ca. .5 \textit{hierarchy\_PBE+vdW\_tier-2.dat}. .5 \textit{control.in}. .5 \textit{conformer.0001.fhiaims}. .5 \textit{conformer.({...}).fhiaims}. .5 \textit{conformer.0001.xyz}. .5 \textit{conformer.({...}).xyz}. .4 Cd. .4 Hg. .4 Pb. .4 Sr. .2 Gln. .2 Glu. .2 GluH. .2 Gly. .2 HisD. .2 HisE. .2 HisH. .2 Ile. .2 Met. .2 Leu. .2 Lys. .2 LysH. .2 Phe. .2 Pro. .2 Ser. .2 Thr. .2 Trp. .2 Tyr. .2 Val. } \caption{{\bf Schematic representation of the folder organization of the data.} Each folder, as exemplified for the Ca$^{2+}$-coordinated cysteine dipeptide, contains coordinate files in two formats (standard XYZ and FHI-aims input), the computational settings file for FHI-aims (control.in), and the energy hierarchies (PBE+vdW, ``tight''/tier-2 level) per system.} \label{fig:folders} \end{figure} \begin{figure} \includegraphics[width=1\textwidth]{./Figure_5_Ala_results.pdf} \caption{{\bf Comparison of search strategies. } \textbf{(a)} The conformational energy hierarchies for alanine after the global search and the local refinement, together with the reference hierarchy at the DFT-PW91 level published by Maul \textit{et al}.\cite{jcc28_1817} Conformers indicated by black lines were found in the global search; the conformers in red were located only after the local refinement step. 
The blue line in the reference conformational hierarchy represents a minimum not found in our search and not present at the PBE+vdW level. \textbf{(b)} Conformations of the alanine molecule. Conformers marked with an asterisk (*) were found in the local refinement step of our search strategy. Atoms are color-coded as follows: cyan (C), blue (N), red (O), white (H). The conformer labeled with X was found by Maul \emph{et al.} in PW91 calculations\cite{jcc28_1817} but is unstable at the PBE+vdW level. } \label{ala_example} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\textwidth]{./Figure_6_MAE-MAD-plots.pdf} \end{center} \caption{{\bf Comparison of different DFAs to MP2 energies.} Mean absolute error (MAE) and maximal error (in meV) between relative energies at the DFA (PBE+vdW, PBE+MBD*, and PBE0+MBD*) and MP2 levels of theory, using the PBE+vdW minimum structures from the database for the systems of Ala and Phe with neutral end caps, both in isolation and in complex with a Ca$^{2+}$ cation. Computational details are given in the text. Exact numbers are summarized in Table~\ref{tbl:rmsd-dfa-vs-mp2}. } \label{MP2bench} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{./Figure_7_comp2exp.pdf} \end{center} \caption{{\bf Comparison of the gas-phase binding energies of Ca$^{2+}$ to different amino acids} calculated in this work ($y$ axis) to the experimentally inferred hierarchy of gas-phase binding energies of Ca$^{2+}$ to different amino acids by Ho \emph{et al.}\cite{rcms21_1097} The amino acids are ordered along the $x$ axis from the highest to the lowest experimental Ca$^{2+}$ binding energy. Protonated and deprotonated Asp and Glu are not included among the experimental data and are here shown as predictions. 
$E_\mathrm{binding}$ is high for deprotonated Asp and Glu since these forms of the amino acid would carry a negative charge.} \label{be_exp} \end{figure} \clearpage \section*{Tables} \begin{table} \caption{Mean absolute error (MAE) and maximal error (in meV; in parentheses: in kcal/mol) between different relative energies at the DFA (PBE+vdW, PBE+MBD*, and PBE0+MBD*) and MP2 level of theory, using structures of obtained minima on the PBE+vdW level from the database for the systems of Ala and Phe with neutral end caps, both in isolation and in complex with a Ca$^{2+}$ cation. Computational details are given in the text.} \centering \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{System}} & \textbf{MAE} [meV] & \textbf{Maximal error} [meV] \\ \hline \multirow{3}{*}{Ala} & PBE+vdW & 24 (0.5) & 44 (1.0) \\ & PBE+MBD* & 23 (0.5) & 44 (1.0) \\ & PBE0+MBD* & 13 (0.3) & 28 (0.6) \\ \hline \multirow{3}{*}{Phe} & PBE+vdW & 25 (0.6) & 78 (1.8) \\ & PBE+MBD* & 26 (0.6) & 77 (1.8) \\ & PBE0+MBD* & 16 (0.4) & 57 (1.3) \\ \hline \multirow{3}{*}{Ala+Ca$^{2+}$} & PBE+vdW & 17 (0.4) & 23 (0.5) \\ & PBE+MBD* & 15 (0.3) & 22 (0.5) \\ & PBE0+MBD* & 9 (0.2) & 15 (0.3) \\ \hline \multirow{3}{*}{Phe+Ca$^{2+}$} & PBE+vdW & 105 (2.4) & 225 (5.2) \\ & PBE+MBD* & 61 (1.4) & 146 (3.4) \\ & PBE0+MBD* & 50 (1.2) & 104 (2.4) \\ \hline \end{tabular} \label{tbl:rmsd-dfa-vs-mp2} \end{table} \clearpage \noindent Further tables are provided in a Microsoft Excel file and as tab-delimited text files as Supporting Information to this article: \begin{description} \item[Table S1] Parameters specific to the REMD simulations of the different systems: the number of \textsl{Replicas}, the probability of \textsl{Acceptance} as well as the \textsl{Time} between exchange attempts, and the \textsl{Temperature} range of the replicas. 
\item[Table S2] Number of conformers found in the different stages (after global search and after refinement) of the search scheme for amino acids, dipeptides, and complexes thereof with Ca$^{2+}$ cations. For the amino acids, the basin hopping search was performed starting from the non-zwitterionic as well as from the zwitterionic state. These numbers are separated by a ``+'' in the respective column. \item[Table S3] Numbers of conformers found for the amino acids (AA) and their complexes with the investigated divalent cations. \item[Table S4] Numbers of conformers found for the dipeptides (Dip.) and their complexes with the investigated divalent cations. \item[Table S5a] Extrapolation of SCF energies as proposed by Karton and Martin: $E^{n}_{SCF} = E^{CBS}_{SCF} + A\, e^{-\alpha \sqrt{n}}$ with $n = 3,4,5$; $A$, $\alpha$, $E^{CBS}_{SCF}$ determined by a least-squares fit; the fit is exact since $\#\text{parameters} = \#\text{datapoints} = 3$; all values in eV. \item[Table S5b] Extrapolation of MP2 correlation energies as proposed by Truhlar: $E^{n}_{corr} = E^{CBS}_{corr} + B\, n^{-\beta}$ with $n = 3,4,5$; $B$, $\beta$, $E^{CBS}_{corr}$ determined by a least-squares fit; the fit is exact since $\#\text{parameters} = \#\text{datapoints} = 3$; all values in eV. \item[Table S6] Basis set superposition errors (BSSE) for SCF and MP2 correlation energies with $n = T/Q/5$; all values in eV. \item[Table S7] Relative gas-phase Ca$^{2+}$ binding energies for the amino acids from experiments by Ho \textit{et al.}\cite{rcms21_1097} and absolute binding energies in the gas phase from DFT-PBE+vdW calculations for amino acids and dipeptides. \end{description} \clearpage
\section{Introduction} A large portion of the wind energy resource is found offshore and in waters deeper than 50 meters, where it is not feasible to deploy wind turbines by means of traditional bottom-fixed solutions. Floating offshore wind turbines (FOWTs) are a solution to this problem. However, the non-fixed platform results in additional engineering challenges: in particular, the low-frequency modes associated with the platform rigid-body motion and the additional wave forcing may lead to increased fatigue loads and power oscillations. The majority of FOWTs use a variable-speed variable-pitch (VS-VP) controller based on generator speed feedback for rotor speed and power regulation. For FOWTs, the control objective is to track the nominal power curve in the presence of the additional forcing due to wind and wave disturbances. The presence of rigid-body motion modes associated with the floating platform poses additional constraints on the effectiveness of such a control strategy. As has been widely shown \cite{jonkman2008influence}, the collective pitch controller (CPC) may interact with the platform modes, resulting in large motions of the floating structure. This is known as the negative damping problem (NDP). In order to prevent it, the CPC bandwidth is decreased, trading the capability of rejecting wind and wave disturbances for lower platform motions. More advanced control strategies have been considered in previous works in order to improve the power tracking capabilities under wind and wave disturbances. A model-based linear quadratic regulator (LQR) controller was proposed in \cite{Lemmer_2016}. LQR has been shown to be an effective way to reject wind disturbances; however, it is still ineffective in rejecting wave-induced effects. A more advanced control strategy is non-linear model predictive control (NMPC) \cite{schlipf2013nonlinear}. 
Using a simplified model of the FOWT and a full preview of the incoming wind and wave disturbances, an optimisation algorithm calculates the optimal control action. NMPC obtains superior performance in terms of power production variation reduction and load reduction. However, the optimisation problem is computationally too demanding to be solved in real time. While NMPC can be seen as an upper limit for FOWT controller performance, a simpler control logic is required to calculate the control action in real time. To improve the wind disturbance rejection capabilities, LIDAR-assisted feedforward (FF) control has been found to be an effective technology \cite{schlipf2015collective}. A FF action, fed by a measurement of the incoming wind field, is added to the control action of a conventional feedback (FB) controller. This FF control logic attenuates the wind excitation more effectively while preserving the stability and simplicity of the FB controller. The main objective of this work is to develop a similar FF control strategy based on waves, such that wave disturbances can be compensated for in a simple manner and without compromising a realistic implementation. Particular care is given to obtaining an accurate linearized model of the FOWT dynamics, based on a real-time preview of the surface elevation. A novel FF controller is formulated based on this linear model, and it is shown that adding such a control logic to the standard FB controller improves the performance of the FOWT. To verify the compensation capabilities, this paper focuses on compensating wave-induced rotor speed variations. However, the same procedure can be used to compensate other wave-induced disturbances such as platform pitch motion or tower-base loads. It has already been shown that an accurate surface elevation preview can be obtained \cite{blondel2012reconstruction} by using regular ship radar systems, which are available at relatively low cost. 
Therefore, in this work, the emphasis is placed on demonstrating the potential of wave-FF control rather than on the implementation of the wave measurement technology. The remainder of this paper is organised as follows. Section 2 introduces the FOWT that is considered for developing the control strategy. Section 3 presents the linear model used to describe the wave-induced dynamics. Section 4 introduces the control law of the novel controller. In Section 5, the reference FOWT is subjected to high-fidelity simulations and the results are discussed. The paper is concluded in Section 6. \section{Case study} The effectiveness of the wave-FF controller is demonstrated via a case study. The INNWIND.EU TripleSpar platform concept is used \cite{INNWIND} together with the DTU 10MW wind turbine \cite{10MWDTU}. The DTU VS-VP controller is used as the baseline (BL) FB controller. The dynamic model is reduced to only the most fundamental degrees of freedom (DOFs), as shown in \cref{fig:DOF}. We consider the platform pitch $\beta_p$, platform surge $x_p$, elastic tower deflection $x_d$ and rotor speed $\Omega$. The controllable inputs are the collective pitch angle $\theta_c$ and the generator torque $\tau_g$. The disturbance inputs are the rotor-effective wind speed $v_0$ and the surface elevation $\eta$ at the platform origin. Linear wave kinematics and wave excitation forces were considered for zero-degree wave heading and zero wind-wave misalignment. Wave forces are modelled using potential flow theory. 
\begin{figure} \centering \includegraphics[width=50mm]{fig-fowt-labels.png} \captionof{figure}{Visualisation of the model used in this case study, containing the degrees of freedom (black), the control inputs (green) and the disturbance inputs (red).} \label{fig:DOF} \end{figure} \section{Linear modelling of the wave disturbance effects} A linear model of the FOWT allows predicting the system dynamics as a function of the disturbance and control inputs, applying linear control engineering theory, and developing a linear model-based FF controller. \Cref{fig:linear-model} shows a block diagram representation of the linear approximation model $\hat{G}$ used in this work, consisting of three sub-systems: \begin{itemize} \item The linearized FOWT dynamics obtained from the Simplified Low-Order Wind turbine (SLOW) model by \cite{lemmer2020multibody}. In this work, SLOW computes the plant TFs between the outputs $y$ and the inputs: generator torque $\tau_g$, collective blade pitch angle $\theta_c$, wind speed $v_0$ and wave excitation forces $[F_x^{we},\ M_y^{we}]$. \item A parametric wave excitation model (PWEM), mapping the surface elevation $\eta$ to the wave forces $[F_x^{we},\ M_y^{we}]$. \item A surface elevation prediction model that uses the wave elevation measured at time $t$ at point A in front of the FOWT, $\eta_A(t)$, to predict the wave elevation at the FOWT's centre of buoyancy at time $t+t_p$, named $\eta_0(t+t_p)$. \end{itemize} The linear model is mainly based on SLOW; for more information on SLOW, the reader is referred to \cite{lemmer2020multibody}. The following subsections derive the PWEM and the wave prediction model. \begin{figure}[h] \centering \includegraphics[width=0.80\linewidth]{fig-crolm-2.pdf} \caption{Block diagram of the linear model and its sub-systems.} \label{fig:linear-model} \end{figure} \subsection{Parametric wave-excitation model} The PWEM expresses the wave forces in terms of wave elevation measurements. 
Wave forces are introduced in the majority of time-domain potential flow simulation models by means of non-parametric frequency-dependent coefficients, obtained from panel code (e.g. WAMIT) pre-calculations. The force coefficients represent a non-causal model \cite{Falnes}. This is exemplified in \cref{fig:causality-problem}. The impulse response of the force coefficients, shown by the black dashed line, results in wave forces at negative times. This occurs because wave-force-coefficient panel codes consider the wave forces to be caused by the wave elevation at the platform centre of buoyancy. In reality, forces are present as soon as the wave impacts the front face of the structure. In order to make the model causal, its output response has to be delayed by $t_p$ seconds, where $t_p$ is the smallest delay for which a wave elevation impulse at time $t=0$ no longer results in significant forces at negative times. The response of the causalized system is shown in \cref{fig:causality-problem} in blue, with a time delay of $t_p=10$ seconds. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{fig-time-shift.pdf} \caption{Impulse response of the force coefficients, indicating the non-causality of the non-delayed force coefficients.} \label{fig:causality-problem} \end{figure} Next, a linear time-invariant (LTI) state-space model relating wave elevation to wave forces is obtained from the causalized non-parametric model by means of frequency-domain subspace identification. This parameterization was first proposed in \cite{PWEM} using time-domain system identification. The surge force and pitch moment coefficients of the TripleSpar are visualised in \cref{fig:system-identification} in blue. The identification was carried out by means of the N4SID method implemented in Matlab and resulted in a 9th-order single-input multi-output model, shown in \cref{fig:system-identification}. 
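The causalization step applied before the identification can be illustrated with a toy kernel; the damped oscillation below is a made-up stand-in for the inverse Fourier transform of the panel-code force coefficients, not the actual TripleSpar data:

```python
import numpy as np

dt, t_p = 0.1, 10.0                  # time step [s], causalizing delay [s]
t = np.arange(-50.0, 50.0, dt)       # two-sided time axis of the force kernel

# Toy non-causal wave-force kernel: a damped oscillation centred at t = 0
h = np.exp(-(t / 4.0) ** 2) * np.cos(t)

# Fraction of kernel energy at negative times, before the delay ...
e_neg = np.sum(h[t < 0] ** 2) / np.sum(h ** 2)

# ... and after delaying the response by t_p seconds: h_d(t) = h(t - t_p)
h_d = np.interp(t - t_p, t, h, left=0.0)
e_neg_delayed = np.sum(h_d[t < 0] ** 2) / np.sum(h_d ** 2)

print(e_neg, e_neg_delayed)  # the delayed kernel is essentially causal
```

Delaying the kernel by $t_p$ pushes essentially all of its energy to non-negative times, at the cost of an overall input-output delay that the wave prediction model of the next subsection compensates.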
The fit to the estimation data is found to be $88\%$ and $96\%$ for the surge force and pitch moment respectively, with the model order selected using Akaike's final prediction error (FPE) criterion. Moreover, the fit is especially good in the typical wave frequency range, from $1/20$ Hz to $1/3$ Hz. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig-system_identification.pdf} \caption{Subspace identification of the non-parametric wave force coefficients. The result denotes the PWEM.} \label{fig:system-identification} \end{figure} \subsection{Wave prediction model} \label{sec:wave-prediction} The PWEM obtained in the previous section contains an input-output delay. Thus, if the wave elevation at the platform location at time $t$ is used as input, the output wave forces are obtained at time $t-t_p$. To know the forces at the current time $t$, the PWEM must be fed with the wave elevation at the future time $t+t_p$. This information can be extracted from a measurement of the surface elevation upstream of the FOWT. The transfer function (TF) that relates the upstream wave elevation measurement $\eta_A(t)$ and the wave elevation at the platform location at the future time, $\eta_0(t+t_p)$, is obtained from the combination of two TFs. The first TF relates the surface elevation at two points at the same time instant in deep water: \begin{equation} H(\omega) = \frac{\eta_0}{\eta_A} = \exp{\biggl(-i\frac{\omega^2 L}{g}\biggr)}, \label{eq:place-shift} \end{equation} where $\eta_A$ and $\eta_0$ are the wave elevations at the upstream point and at the platform location, and $L$ is the distance between point A and the platform location. The second TF is the expression for a negative time delay $t_p$ in the frequency domain. 
\begin{equation} P(\omega)=\exp{(i\omega t_p)}. \label{eq:time-delay} \end{equation} Thus, combining \cref{eq:place-shift} and \cref{eq:time-delay}, the TF between the upstream and platform wave elevations is obtained: \begin{equation} H_p(\omega)=H(\omega)P(\omega) = \exp{\biggl(i \omega \biggl(t_p - \frac{\omega L}{g}\biggr)\biggr)}. \end{equation} Causality is obtained if the phase term $t_p - \omega L / g$ is non-positive. By substituting the wave period $T$ via $\omega=2\pi/T$ and requiring causality for waves with periods up to $\overline{T}$ seconds, the minimal measurement distance $L$ in front of the FOWT becomes: \begin{equation} L \geq \frac{g \overline{T} t_p}{2\pi}. \end{equation} For this case study, predicting waves with periods up to $\overline{T}=20$ seconds, $t_p=10$ seconds in advance, requires a minimum measuring distance of $L=313$ meters. \section{Feedforward controller design} \Cref{fig:control-logic} presents the novel control logic in a block diagram. The FOWT plant, named $G$, is controlled by the collective blade pitch angle $\theta_c$ and the generator torque $\tau_g$, based on a measurement of the rotor speed $\Omega$. This control loop is the so-called feedback control loop. Meanwhile, the disturbances, namely the rotor-effective wind speed $v_0$ and the surface elevation $\eta_0$, act on the same plant $G$. The description so far is a regular feedback-controlled FOWT block scheme subjected to wind and wave disturbances. A novel FF controller $C_{ff}$ and a wave predictor $\hat{H}_p$ are added to the regular BL feedback controller. The FF controller $C_{ff}$ computes a control action in addition to that of the feedback controller, using an upstream measurement of the surface elevation $\eta_A(t)$. The additional control signal is designed to attenuate the wave-induced dynamics $y^{we}$. 
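Before detailing the controller internals, the minimum measuring distance derived in the previous section can be checked numerically (a sketch using the case-study values):

```python
import math

g = 9.81        # gravitational acceleration [m/s^2]
T_max = 20.0    # longest wave period to be predicted [s]
t_p = 10.0      # required prediction horizon [s]

# Deep-water causality condition: t_p - omega*L/g <= 0 for all
# omega >= 2*pi/T_max, hence L >= g * T_max * t_p / (2*pi)
L_min = g * T_max * t_p / (2.0 * math.pi)
print(math.ceil(L_min))  # rounds up to 313 m for the case study
```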
A proportional gain $k_{ff}$ is included in the controller to achieve a trade-off between the control objective and the control action. \begin{figure}[h] \centering \includegraphics[width=0.63\linewidth]{fig-wave-ff-5.pdf} \caption{Control logic of the BL controller complemented with the wave-FF controller.} \label{fig:control-logic} \end{figure} The FF controller $C_{ff}$ is designed using the linear approximation of the effect of the wave disturbance, $\hat{G}_\eta$, coupled with an inverse $\hat{G}^{-1}_i$. It computes the additional control inputs for the generator torque $\tau_{g,ff}$ and the collective pitch angle $\theta_{c,ff}$, using the control law shown in \cref{eq:FF-pitch}. \Cref{fig:control-law} illustrates the general block diagram of this control law. The design compensates the wave-induced effect on two arbitrary system outputs $y_i$ and $y_j$. \begin{equation} u_{ff}(s)= -\underbrace{k_{ff} \cdot \hat{G}_{\eta \to \Omega}(s) \cdot \hat{G}_{u_i \to \Omega }^{-1}(s)}_{C_{ff}(s)} \cdot \eta_{0,p}(s) \label{eq:FF-pitch} \end{equation} \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{fig-ff-controller.pdf} \caption{Block diagram of a general FF controller $C_{ff}$ with the objective to compensate the wave-induced effect on two arbitrary system outputs $y_i$ and $y_j$. $k_{ff}$ denotes the proportional gain, allowing the controller to be tuned less aggressively.} \label{fig:control-law} \end{figure} If the outputs are controllable, the controller can compensate up to two arbitrary system outputs. The performance of the controller depends on the quality of the linear model. To prove the methodology, an FF controller for attenuating rotor-speed variations is designed. The FF controller acts on the generator torque when the wind turbine operates in partial-load conditions and on the collective blade pitch angle in full-load conditions. For this configuration, the outcome of \cref{eq:FF-pitch} is an 18th-order LTI TF. 
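As a toy illustration of the control law in \cref{eq:FF-pitch}, the snippet below evaluates the FF frequency response as a ratio of plant responses on a frequency grid; the low-order transfer functions are made up for illustration and do not represent the actual 18th-order SLOW-based model:

```python
import numpy as np

k_ff = 0.5                               # proportional FF gain
w = 2 * np.pi * np.logspace(-2, 0, 200)  # frequency grid [rad/s]
s = 1j * w

# Made-up low-order stand-ins for the SLOW transfer functions
G_eta_to_Omega = 0.02 * s / (s**2 + 0.1 * s + 0.04)  # wave elevation -> rotor speed
G_u_to_Omega = -1.5 / (s + 0.2)                      # control input  -> rotor speed

# FF law evaluated on the grid: C_ff = -k_ff * G_eta->Omega / G_u->Omega
C_ff = -k_ff * G_eta_to_Omega / G_u_to_Omega

mag_db = 20.0 * np.log10(np.abs(C_ff))   # Bode magnitude in dB
print(mag_db.min(), mag_db.max())
```

Evaluating the ratio pointwise on the grid sidesteps forming the (generally improper) inverse transfer function explicitly, which is also why the paper's realizable controller requires the order reduction and high-pass filtering discussed next.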
An example of the Bode magnitude plot is shown in \cref{fig:control-TF} for operating point $\overline{v}_0=8$ m/s. Incident ocean waves typically only contain a significant amount of energy for periods of $3 \leq T \leq 20$ s. This frequency range is highlighted in blue. The controller should only compensate for wave responses in this frequency range. Because the 18th-order LTI TF is also sensitive to frequencies below the wave band, the order of the original controller is appropriately reduced to an 8th-order system (red dashed line) with similar properties in the frequency range of interest. Moreover, a high-pass filter is applied to reduce the low-frequency sensitivity even further. The final controller is a 9th-order LTI TF, shown by the red solid line. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig-controller-reduction.pdf} \caption{Bode plot of the torque controller and the effect of loop shaping, for operating point $\overline{v}_0=8$ m/s.} \label{fig:control-TF} \end{figure} \section{High-fidelity simulation results} \label{sec:results} To evaluate the performance of the controller, high-fidelity simulations are carried out in FAST v8.16 using the environmental conditions indicated in \cref{tab:load-cases}. The performance of the BL controller is compared to the performance of the same controller extended with the wave-FF controller. Moreover, the BL controller is also simulated without waves to provide an upper performance limit for wave-FF control. The control objective is to reduce rotor speed variation. The performance of the two controllers is compared in terms of rotor speed variance ($\Omega$), mean power production ($P$), mean blade pitch action ($\dot{\theta}_c$), tower-base fatigue damage ($M_{ty}$), blade fatigue damage ($M_{b}$) and low-speed shaft fatigue damage ($M_{lss}$). The fatigue damage is measured by a 1-Hz Damage Equivalent Load (DEL).
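For reference, a 1-Hz DEL condenses a load time series into the amplitude of a single 1-Hz sinusoidal load that would cause the same fatigue damage under a Wöhler (S-N) exponent $m$. The sketch below assumes the cycle ranges and counts have already been extracted, e.g. by rainflow counting; the exponent values mentioned in the comments are typical assumptions (around 4 for welded steel, around 10 for composite blades), not values stated in this paper.

```python
def del_1hz(cycles, m, duration_s):
    """1-Hz Damage Equivalent Load.

    cycles:     iterable of (count, load_range) pairs, e.g. from rainflow counting
    m:          Woehler (S-N) exponent (typically ~4 for steel, ~10 for blades)
    duration_s: length of the time series in seconds (N_ref = 1 Hz * duration)
    """
    n_ref = 1.0 * duration_s  # equivalent number of 1-Hz cycles
    damage = sum(n * s**m for n, s in cycles)
    return (damage / n_ref) ** (1.0 / m)

# Sanity check: n_ref cycles of a single range S must give DEL == S.
print(del_1hz([(600, 50.0)], m=4, duration_s=600))  # 50.0
```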
\begin{table}[h] \caption{Selection of load cases based on their occurrence probability, selected from LIFES50+ (DLC1.2).} \label{tab:load-cases} \centering \begin{tabular}{lllll} \hline & $v_0$ & $H_s$ & $T_s$ & $p$ \\ & {[}m/s{]} & {[}m{]} & {[}s{]} & {[}-{]} \\ \hline Load case 1 & 5 & 1.4 & 7 & 14\% \\ Load case 2 & 7.1 & 1.7 & 8 & 24\% \\ Load case 3 & 10.3 & 2.2 & 8 & 26\% \\ Load case 4 & 13.9 & 3 & 9.5 & 20\% \\ Load case 5 & 17.9 & 4.3 & 10 & 11\% \\ Load case 6 & 22.1 & 6.2 & 12.5 & 3.8\% \\ Load case 7 & 25 & 8.3 & 12 & 0.74\% \\ \hline & & & & $\approx$100\% \end{tabular} \end{table} The results demonstrate that wave-FF is an effective control strategy to reject wave-induced rotor speed variations for FOWTs. \Cref{fig:results} shows the performance differences between the three configurations for each load case. The Weibull averages over all load cases are shown in \cref{tab:weibull-performance}. The proposed controller reduces the rotor speed variations by $26\%$ with respect to regular BL control, such that the FOWT experiences only $4\%$ more rotor speed variation compared to operation in still water. These reductions take place in the wave frequency range, considered to be the most difficult frequencies to attenuate by e.g. \cite{Lemmer_2016}. As a side-effect of the novel controller, the power production is increased, and the tower and blade loads are reduced. These improvements require moderate additional pitch control action and result in slightly more shaft fatigue. The FF controller is especially effective in severe environmental conditions, because wave loads become more dominant over wind loads in more extreme load cases. This effect can be explained as follows: while the rotor thrust force decreases for large wind speeds (because the blade pitch angle increases), the wave height increases. Therefore, the wave-FF controller can reduce a larger percentage of the rotor speed variations in severe environmental conditions.
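The "Weibull average" reported over all load cases is, in effect, an occurrence-probability-weighted mean over the load cases of \cref{tab:load-cases}. A minimal sketch follows; the occurrence probabilities are taken from the load-case table, while the per-load-case metric values are made-up placeholders, since the actual values are read from the simulations.

```python
def weibull_average(values, probabilities):
    """Occurrence-probability-weighted mean over load cases.
    Probabilities are renormalised so they sum to exactly 1."""
    total = sum(probabilities)
    return sum(v * p for v, p in zip(values, probabilities)) / total

# Occurrence probabilities of load cases 1-7 (from the load-case table).
probs = [0.14, 0.24, 0.26, 0.20, 0.11, 0.038, 0.0074]

# Placeholder per-load-case rotor-speed STDs [RPM] -- illustrative only.
std_omega = [0.10, 0.18, 0.25, 0.35, 0.45, 0.60, 0.70]

print(round(weibull_average(std_omega, probs), 3))
```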
\begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{fig-performance-per-load-case.pdf} \caption{Performance resulting from the high-fidelity simulations for each controller and for each load case.} \label{fig:results} \end{figure} \begin{table}[h] \centering \caption{Performance results, obtained from the high-fidelity simulations for each controller and based on the Weibull average.} \label{tab:weibull-performance} \begin{tabular}{l|lll|l} \toprule & BL & BL+FF & BL (no waves) & $\frac{BL+FF}{BL}$ [\%] \\ \midrule mean $P$ [kW] & 6090 & 6103 & 6106 & 0.21\% \\ STD $\Omega$ [RPM] & 0.32 & 0.23 & 0.22 & -26\% \\ mean $\dot{\theta}_c$ [deg/s] & 0.032 & 0.13 & 0.0077 & 290\% \\ DEL $M_{ty}$ [MNm] & 64 & 55 & 12 & -13\% \\ DEL $M_b$ [MNm] & 3.9 & 3.3 & 2.9 & -15\% \\ DEL $M_{LSS}$ [MNm] & 0.55 & 0.57 & 0.42 & 3.4\% \\ \bottomrule \end{tabular} \end{table} \newpage \section{Conclusion and outlook} Based on high-fidelity simulations, it was shown that the novel feedforward (FF) control approach is able to significantly reduce wave-induced rotor speed variations, while indirectly reducing structural loads on the turbine and increasing the energy capture. By complementing the regular feedback loop, the controller complexity is only increased by a linear transfer function and the regular stability properties are unaffected. Even though larger control actions are needed for nearly-full compensation, the framework allows a trade-off between stable power production and control input by using a simple proportional gain. Whereas this work uses wave knowledge to regulate the rotor speed, the methodology presented here can be extended to attenuate two arbitrary system outputs, such as rotor speed together with platform pitch motion. Controllability of the control objective should be taken into account, as some control objectives may require large control actions. Future work will include the validation of the proposed technique via a wave basin test.
The effects of second-order wave forces, which are known to be important for the dynamics of a floating offshore wind turbine, will be investigated as well. Furthermore, the effect of measurement errors and wind-wave misalignment will be studied. \section{Acknowledgements} This research has been partially funded by the European Union through the Marie Sklodowska-Curie Action (Project EDOWE, grant 835901). \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} Density functional theory (DFT) is one of the most widely used methods for performing quantum mechanical analysis of many-body systems. DFT is founded upon two core theorems. The first of these is the Hohenberg-Kohn theorem~\citep{HK1964}, which demonstrates, for ground states, that the many-body wavefunction, the external potential, and the density are uniquely determined by each other: \begin{equation} \label{HK_map} V\rbr{\mbf{r},\mbf{r}_{2},\ldots,\mbf{r}_{N}} \rightleftharpoons \psi\rbr{\mbf{r},\mbf{r}_{2},\ldots,\mbf{r}_{N}} \rightleftharpoons \rho\ensuremath{\rbr{\mbf{r}}}. \end{equation} Therefore, wavefunctions, potentials, and expectation values of any operator can, in principle, be written as functionals of the ground-state density. The Hohenberg-Kohn theorem applies for any given strength of the interaction between the particles. Thus, in the second core theorem of DFT, Kohn and Sham recognised that the many-body system of interacting particles can be described by an auxiliary system of \textit{non-interacting} particles, in a different external potential (the Kohn-Sham potential), that produces the same ground-state density~\citep{KS1965}. Since the Kohn-Sham particles are non-interacting, the wavefunction for this system is composed of single-particle orbitals, found by solving a system of single-particle equations, the Kohn-Sham equations. The solution of these equations thus provides a method to obtain the many-body ground state density that bypasses the many-body wavefunction (the Kohn-Sham scheme)~\citep{KS1965}. These two theorems are sufficient to construct DFT in a formal way; however, there are open questions with regards to both of them. 
Although the Hohenberg-Kohn theorem guarantees a one-to-one relationship between potentials and ground-state wavefunctions, as well as between ground-state wavefunctions and ground-state densities, it offers no prescription for how these wavefunctions or potentials are produced given a particular density. For the Kohn-Sham scheme, although it is known that the Kohn-Sham potential is constructed from the sum of the external, Hartree, and exchange-correlation potentials, the exchange-correlation component is generally unknown and hence must be approximated in practical DFT calculations. There are numerous approximations to the exchange-correlation potential, covering a wide range of sophistication and complexity~\citep{Burke2012}, and the suitability of an approximation usually depends on the problem studied. In this work, we apply the metric space approach to quantum mechanics~\citep{D'Amico2011,Sharp2014,Sharp2015} to potentials in order to gain insight into the two fundamental theorems of DFT. First, we use the general procedure from Ref.~\citep{Sharp2014} to derive two metrics for external potentials. These metrics complement the metrics for wavefunctions and densities derived in Ref.~\citep{D'Amico2011} and ensure that we have metrics for each of the fundamental physical quantities associated with DFT. We will then revisit the Hohenberg-Kohn theorem. This was first studied with the metric space approach to quantum mechanics in Ref.~\citep{D'Amico2011}, where only the second part of Eq.~(\ref{HK_map}), concerning ground-state wavefunctions and densities, was considered. Now, with the external potential metrics, we will extend the study to incorporate the first part of Eq.~(\ref{HK_map}), which establishes a unique map between the external potential and the ground-state wavefunction. We will then turn our attention to the Kohn-Sham scheme.
By studying model systems for which the Kohn-Sham quantities can be determined exactly, we will use our metrics to quantify the differences between many-body and Kohn-Sham quantities. We will use atomic units $\rbr{\hbar=m_{e}=e=1/4\pi\epsilon_{0}=1}$ throughout this paper. \section{Deriving Metrics for Potentials}\label{sec:metric} In order to derive a metric for external potentials, we use the metric space approach to quantum mechanics~\citep{D'Amico2011,Sharp2014,Sharp2015}, which allows us to derive metrics from conservation laws of the form \begin{equation} \label{conservation} \int\abs{f\rbr{\mbf{x}}}^{p} d\mbf{x} = c, \end{equation} where $c$ is a finite, positive constant. Equation~(\ref{conservation}) has the form of an $L^p$ norm, from which a metric can be derived in a standard way. As these metrics then naturally descend from the physical conservation laws, we refer to them as ``natural'' metrics for the related physical functions. A metric is a function that assigns a distance between two elements of a set and is subject to the axioms~\citep{Sutherland2009,Megginson1998} \begin{align} D\rbr{x,y} &\geqslant 0\ \text{and}\ D\rbr{x,y}=0 \iff x=y, \label{axiom1}\\ D\rbr{x,y} &= D\rbr{y,x}, \label{axiom2}\\ D\rbr{x,y} &\leqslant D\rbr{x,z}+D\rbr{z,y}, \label{axiom3} \end{align} for all elements $x,y,z$ in the set. A set with an appropriate metric defined on it is called a metric space. 
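As a concrete illustration of how an $L^{1}$-type "natural" metric is evaluated, and of the axioms above, the sketch below discretises $D(f_{1},f_{2})=\int\abs{f_{1}-f_{2}}\,dx$ on a uniform grid and checks identity, symmetry, and the triangle inequality numerically. This is an illustrative one-dimensional discretisation, not code from the paper.

```python
def natural_metric(f1, f2, dx):
    """Discretised L^1 'natural' metric D = integral |f1 - f2| dx."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) * dx

# Three non-negative trial functions sampled on a uniform grid.
dx = 0.01
xs = [i * dx for i in range(1000)]
f = [x * x for x in xs]
g = [abs(x - 5.0) for x in xs]
h = [2.0 * x for x in xs]

# Metric axioms, checked numerically:
assert natural_metric(f, f, dx) == 0.0                       # identity
assert natural_metric(f, g, dx) == natural_metric(g, f, dx)  # symmetry
assert natural_metric(f, g, dx) <= (natural_metric(f, h, dx)
                                    + natural_metric(h, g, dx) + 1e-9)  # triangle
print(natural_metric(f, g, dx))
```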
In time-independent quantum mechanics, the system energy is conserved and it is given by the expectation value \begin{equation} \label{energy_cons0} \int\ldots\int\psi^{*}\rbr{\ensuremath{\mbf{r}}_{1},\ldots,\ensuremath{\mbf{r}}_{N}}\hat{H}\psi\rbr{\ensuremath{\mbf{r}}_{1},\ldots,\ensuremath{\mbf{r}}_{N}} d\mbf{r}_{1}\ldots d\mbf{r}_{N} = EN, \end{equation} where \begin{equation} \label{hamiltonian} \hat{H}= -\sum_{i=1}^{N}\frac{1}{2}\nabla_{i}^{2} + \sum_{j<i}^{N} U\rbr{\ensuremath{\mbf{r}}_{i},\ensuremath{\mbf{r}}_{j}} + \sum_{i=1}^{N} v\rbr{\mbf{r}_{i}}, \end{equation} is the system Hamiltonian, $V=\sum_{i=1}^{N} v\rbr{\mbf{r}_{i}}$ is the external potential, and $\psi\rbr{\ensuremath{\mbf{r}}_{1},\ldots,\ensuremath{\mbf{r}}_{N}}$ is the system state. We have followed Ref.~\citep{D'Amico2011} and normalised the many-body wavefunction $\psi\rbr{\ensuremath{\mbf{r}}_{1},\ldots,\ensuremath{\mbf{r}}_{N}}$ to the particle number $N$. In the following we will concentrate on the Coulomb particle-particle interaction $U\rbr{\ensuremath{\mbf{r}}_{i},\ensuremath{\mbf{r}}_{j}}=1/|\ensuremath{\mbf{r}}_{i}-\ensuremath{\mbf{r}}_{j}|$, though the results are valid for a general form of $U\rbr{\ensuremath{\mbf{r}}_{i},\ensuremath{\mbf{r}}_{j}}$. In Eq.~(\ref{hamiltonian}) and the following analysis we focus on electronic systems, as is often done in studies involving DFT when invoking the Born-Oppenheimer approximation. However, our results can be extended to include nuclear terms in the Hamiltonian, which we demonstrate in the Appendix. The derivations in the Appendix can be straightforwardly extended to more complex systems comprising various particles and/or species, such as systems including electrons and different ionic species. We will now derive metrics for the external potential from Eq.~(\ref{energy_cons0}) by applying the metric space approach to quantum mechanics.
We start by performing some simple algebra and rewrite Eq.~(\ref{energy_cons0}) in the following two forms: \begin{align} \label{energy_cons1} \int&\ldots\int \sum_{i=1}^{N}{\sbr{-\frac{1}{2}\psi^{*}\nabla_{i}^{2}\psi+\sum_{j<i}^{N}{\frac{\abs{\psi}^{2}}{\abs{\mbf{r}_{i}-\mbf{r}_{j}}}}+\abs{\psi}^{2}v\rbr{\mbf{r}_{i}}}}\nonumber\\ &\times \ensuremath{d\mbf{r}}_{1}\ldots d\mbf{r}_{N} = EN \end{align} and \begin{equation}\label{energy_cons2} \int N\sbr{\tau\rbr{\mbf{r}}+ \frac{1}{2}\int \ensuremath{d\mbf{r}}_{1}\frac{g\rbr{\mbf{r},\mbf{r}_{1}}}{\abs{\mbf{r}-\mbf{r}_{1}}}+v\rbr{\mbf{r}}\rho\rbr{\mbf{r}}}\ensuremath{d\mbf{r}}=EN. \end{equation} Here, we have used the definitions \begin{equation} \tau\rbr{\mbf{r}}\equiv \frac{1}{2}\int\ldots\int\abs{\nabla_{\mbf{r}}\psi\rbr{\mbf{r},\mbf{r}_{2},\ldots,\mbf{r}_{N}}}^{2}\ensuremath{d\mbf{r}}_{2}\ldots d\mbf{r}_{N}\geqslant0\label{kinetic_density}\\ \end{equation} for the kinetic energy density, \begin{equation} g\rbr{\mbf{r}_{1},\mbf{r}_{2}}\equiv\rbr{N-1}\int\ldots\int\abs{\psi\rbr{\mbf{r}_{1},\mbf{r}_{2},\ldots,\mbf{r}_{N}}}^{2}\ensuremath{d\mbf{r}}_{3}\ldots d\mbf{r}_{N}\geqslant0\label{2_part_corr}\\ \end{equation} for the two-particle correlation function, and \begin{equation} \rho\rbr{\mbf{r}}\equiv \int\ldots\int\abs{\psi\rbr{\mbf{r},\mbf{r}_{2},\ldots,\mbf{r}_{N}}}^{2}\ensuremath{d\mbf{r}}_{2}\ldots d\mbf{r}_{N}\geqslant0,\label{1_part_density}\\ \end{equation} for the single-particle density. To derive Eq.~(\ref{kinetic_density}), we have used that for any $i=1\ldots N$ \begin{align} \label{ke_relation} -\frac{1}{2}\int\psi^{*}\nabla_{i}^{2}\psi\ensuremath{d\mbf{r}}_{i}&=-\frac{1}{2}\sbr{\psi^{*}\nabla_{i}\psi}_{\ensuremath{\mbf{r}}_{i}\rightarrow\infty}+\frac{1}{2}\int\sbr{\rbr{\nabla_{i}\psi^{*}}\cdot\rbr{\nabla_{i}\psi}}\ensuremath{d\mbf{r}}_{i}\nonumber\\ &=\frac{1}{2}\int\abs{\nabla_{i}\psi}^{2} \ensuremath{d\mbf{r}}_{i}, \end{align} as $\psi\rightarrow 0$ when $\ensuremath{\mbf{r}}_{i}\rightarrow\infty$. 
This also shows that the kinetic term in Eq.~(\ref{energy_cons1}) is positive. To derive ``natural'' metrics, we must ensure that the conservation laws Eqs.~(\ref{energy_cons1}) and~(\ref{energy_cons2}) can be written in the form of Eq.~(\ref{conservation}), so, after taking the absolute value of their left and right sides, we need to demonstrate that the integrands in their left-hand sides always have the same sign throughout the corresponding domains. From previous considerations, the parts of these integrands corresponding to the kinetic and particle-particle interaction terms, for both Eqs.~(\ref{energy_cons1}) and~(\ref{energy_cons2}), are positive semi-definite everywhere, so we need only to consider the external potential term. Although we cannot guarantee the sign of $v\rbr{\ensuremath{\mbf{r}}}$, we can make use of a gauge transformation. If the potential is modified by a constant, $v\rbr{\ensuremath{\mbf{r}}} \rightarrow v\rbr{\ensuremath{\mbf{r}}}+c$, then the solution to the Schr\"{o}dinger equation is unaffected. Thus, for potentials with a lower bound, we can choose a constant $c$ such that the potential term (and hence the overall integrand) in Eqs.~(\ref{energy_cons1}) and~(\ref{energy_cons2}) is positive semi-definite everywhere \footnote{We will consider the important case of a bare, attractive Coulomb potential in Sec.~\ref{sec:coulomb}}. 
With this in mind we can rewrite Eqs.~(\ref{energy_cons1}) and (\ref{energy_cons2}) as \begin{align} \label{pot_norm1} \int&\ldots\int\abs{\sum_{i=1}^{N}{\sbr{\frac{1}{2}\abs{\nabla_{i}\psi}^2+\sum_{j<i}^{N}{\frac{\abs{\psi}^{2}}{\abs{\mbf{r}_{i}-\mbf{r}_{j}}}}+\abs{\psi}^{2}\sbr{v\rbr{\mbf{r}_{i}}+c}}}}\nonumber\\ &\times\ensuremath{d\mbf{r}}_{1}\ldots d\mbf{r}_{N} = \abs{\rbr{E+c}N}, \end{align} and \begin{align} \label{pot_norm2} \int&\abs{N\sbr{\tau\rbr{\mbf{r}}+\frac{1}{2}\int\ensuremath{d\mbf{r}}_{1}\frac{g\rbr{\mbf{r},\mbf{r}_{1}}}{\abs{\mbf{r}-\mbf{r}_{1}}}+\sbr{v\rbr{\mbf{r}}+c}\rho\rbr{\mbf{r}}}}\ensuremath{d\mbf{r}}\nonumber\\ &=\abs{\rbr{E+c}N}. \end{align} Given that both Eq.~(\ref{pot_norm1}) and Eq.~(\ref{pot_norm2}) are of the sought form~(\ref{conservation}), we can apply the metric space approach to quantum mechanics~\citep{Sharp2014} and derive the corresponding metrics, which read \begin{align} &D_{v_{1}}=\int\ldots\int\abs{f_{1}-f_{2}} \ensuremath{d\mbf{r}}_{1}\ldots d\mbf{r}_{N},\label{pot_metric1}\\ &D_{v_{2}}=\int\abs{h_{1}-h_{2}}\ensuremath{d\mbf{r}},\label{pot_metric2} \end{align} where \begin{align} f&\rbr{\mbf{r}_{1},\ldots,\mbf{r}_{N}}\nonumber\\ &\equiv\sum_{i=1}^{N}\set{\frac{1}{2}\abs{\nabla_{i}\psi}^2+\sum_{j<i}^{N}{\frac{\abs{\psi}^{2}}{\abs{\mbf{r}_{i}-\mbf{r}_{j}}}}+\abs{\psi}^{2}\sbr{v\rbr{\mbf{r}_{i}}+c}}, \end{align} and \begin{equation} \label{h_r} h\rbr{\mbf{r}}\equiv N\sbr{\tau\rbr{\mbf{r}}+\frac{1}{2}\int\ensuremath{d\mbf{r}}_{1}\frac{g\rbr{\mbf{r},\mbf{r}_{1}}}{\abs{\mbf{r}-\mbf{r}_{1}}}+\sbr{v\rbr{\mbf{r}}+c}\rho\rbr{\mbf{r}}}. \end{equation} $D_{v_{1}}$ and $D_{v_{2}}$ apply to both the case in which the system is in an eigenstate and when a more general system state is considered, as demonstrated below. We note that both $\tau\rbr{\mbf{r}}$ and $g\rbr{\mbf{r},\mbf{r}_{1}}$ are uniquely defined by the many-body wavefunction, $\psi\rbr{\ensuremath{\mbf{r}}_{1},\ldots,\ensuremath{\mbf{r}}_{N}}$. 
When the system is in an eigenstate, and for a given particle number and many-body interaction, the time-independent Schr\"{o}dinger equation shows that the many-body wavefunction is uniquely determined by the external potential $v\rbr{\mbf{r}}$. Hence, every term in the integrands of both Eq.~(\ref{pot_norm1}) and Eq.~(\ref{pot_norm2}) (and hence in the related metrics) can be uniquely written as a functional of the external potential so that $f=f\sbr{v}$ and $h=h\sbr{v}$. This demonstrates that Eqs.~(\ref{pot_norm1}) and~(\ref{pot_norm2}) indeed define two norms (and hence metrics) for the external potential $v\rbr{\mbf{r}}$. It is simple to show that, when comparing the same two systems, $D_{v_{2}}<D_{v_{1}}$. We note that the metric $D_{v_{2}}$ is well defined for comparing systems with different numbers of particles because it relies on a single-particle quantity, the function $h\ensuremath{\rbr{\mbf{r}}}$ defined in Eq.~(\ref{h_r}). The metric $D_{v_{1}}$ instead is well defined here only for systems with the same number of particles, $N_{1}=N_{2}$. The issue of defining $D_{v_{1}}$ for systems with different numbers of particles is an open problem related to the fact that the wavefunction is a many-particle quantity. This issue has been discussed previously with reference to $D_{\psi}$~\citep{Arthacho2011,D'Amico2011b}. When considering a system with a \textit{time-independent} Hamiltonian but not in an eigenstate, conservation of energy applies to the time evolution of this state. In this case we can still consider the norms (\ref{pot_norm1}) and (\ref{pot_norm2}) as derived from the conservation of energy. However, now the system state at any time $t$, $\psi\rbr{t}$, will still be determined by the external potential $v\rbr{\mbf{r}}$, but together with the initial condition $\psi\rbr{t=0}$. 
The norms (\ref{pot_norm1}) and (\ref{pot_norm2}) will then still represent norms for the external potential $v\rbr{\mbf{r}}$, and at any time $t$, but {\it given the initial state $\psi\rbr{t=0}$}. This condition mirrors the condition for uniqueness of the relationship between the potential and the wavefunction $v\rbr{t}\longleftrightarrow\psi\rbr{t}$ as set in the core theorems of Time-Dependent DFT~\citep{Ullrich2013}, where indeed this uniqueness is subject to the specific initial condition. Given this caveat, we can also in this case use Eqs.~(\ref{pot_norm1}) and~(\ref{pot_norm2}) to derive appropriate metrics for the external potential in the way presented above. \subsection{Potential metric for eigenstates} \label{sec:eigenstates} For system eigenstates, Eq.~(\ref{energy_cons0}) becomes \begin{equation} \label{energy_cons_es} \int\ldots\int E_{i}\abs{\psi_{i}\rbr{\mbf{r}_{1},\ldots,\mbf{r}_{N}}}^2 d\mbf{r}_{1}\ldots d\mbf{r}_{N} = E_{i}N. \end{equation} The norms for the external potential can then be rewritten as \begin{align} \int\ldots\int \abs{\rbr{E_i+c}\abs{\psi_i}^{2}}\ensuremath{d\mbf{r}}_{1}\ldots d\mbf{r}_{N} = \abs{\rbr{E_i+c}N}, \label{pot_norm1es}\\ \int\abs{\rbr{E_i+c}\rho_{i}\rbr{\mbf{r}}}\ensuremath{d\mbf{r}} = \abs{\rbr{E_{i}+c}N}.\label{pot_norm2es} \end{align} From here the metrics for the external potential become \begin{align} D_{v_{1}}=&\int\ldots\int\abs{\rbr{E_{1_{i}}+c_{1}}\abs{\psi_{1_{i}}}^{2}-\rbr{E_{2_{j}}+c_{2}}\abs{\psi_{2_{j}}}^{2}}\nonumber\\ &\times\ensuremath{d\mbf{r}}_{1}\ldots d\mbf{r}_{N}, \label{pot_metric1es}\\ D_{v_{2}}=&\int\abs{\rbr{E_{1_{i}}+c_{1}}\rho_{1_{i}}\rbr{\mbf{r}}-\rbr{E_{2_{j}}+c_{2}}\rho_{2_{j}}\rbr{\mbf{r}}}\ensuremath{d\mbf{r}}\label{pot_metric2es}. \end{align} \subsection{Coulomb External Potentials} \label{sec:coulomb} Often bare Coulomb potentials are replaced by softened potentials that are finite at $r=0$. 
One example is the modelling of one-dimensional quantum systems~\citep{Javanainen1988,Elliott2012}. When considering softened Coulomb potentials the external potential metrics defined above in Eqs.~(\ref{pot_metric1}) and~(\ref{pot_metric2}) are well defined. However, when the external potential has the bare Coulomb form $v=-1/r$, it diverges to $-\infty$ as $r\rightarrow 0$. This implies that, if $\psi\rbr{\mbf{r}_{1},\ldots,\mbf{r}_{i}=0,\ldots,\mbf{r}_{N}}\neq0$ for at least one value of $i$ and $\rho\rbr{0}\neq0$, it does not seem possible for a gauge transformation to enable the integrand of the potential norms~(\ref{pot_norm1}) and~(\ref{pot_norm2}), respectively, to be positive semi-definite everywhere. We show below that, even in this case, the potential norms~(\ref{pot_norm1}) and~(\ref{pot_norm2}) instead remain well defined. Let us consider the gauge transformation $v\rbr{\mbf{r}}\rightarrow v\rbr{\mbf{r}}+c$ and rewrite Eq.~(\ref{energy_cons1}) using that $\psi=\sum_{i}d_{i}\psi_{i}$, where $\set{\psi_{i}}$ are the eigenstates of $H$, and that $H\psi_{i}=E_{i}\psi_{i}$. Equation~(\ref{energy_cons1}) then becomes \begin{align} \label{energy_cons_es_tot} \int&\ldots\int\sum_{i}\rbr{E_{i}+c}\abs{d_{i}}^2\abs{\psi_{i}\rbr{\ensuremath{\mbf{r}}_{1},\ldots,\ensuremath{\mbf{r}}_{N}}}^2 d\mbf{r}_{1}\ldots d\mbf{r}_{N}\nonumber\\ &=\rbr{E+c}N. \end{align} Equation~(\ref{energy_cons_es_tot}) shows that, as long as $\abs{E_{i}}<\infty$ for any $i$, we can choose a finite $c>0$ such that the integrand in Eq.~(\ref{energy_cons_es_tot}) is positive semi-definite everywhere, even when $v\rbr{\mbf{r}}$, as for the bare Coulomb potential, is not bounded from below. \section{Gauge Freedom and Physical Considerations} In Sec.~\ref{sec:metric}, we demonstrated that a gauge transformation is necessary in order to ensure that the metrics~(\ref{pot_metric1}) and~(\ref{pot_metric2}) are well defined. 
The gauge must ensure that the integrands in Eqs.~(\ref{energy_cons1}) and~(\ref{energy_cons2}), respectively, are positive semi-definite everywhere, but one could make different choices of gauge once this condition is fulfilled. The gauge freedom we are considering reflects the fact that energies are defined up to a constant; however, energy differences have physical significance: When considering problems where it is necessary that the (physical) difference in energy between the systems we are comparing is preserved, we must ensure that we always work in the same gauge for all systems of interest. Hence, the constant $c$ should be the same for \emph{all} of the external potentials that we consider. In fact, from Eqs.~(\ref{pot_norm1}) and~(\ref{pot_norm2}) we see that in this way the energy of each system is modified by the same amount, and hence the energy difference between any two systems remains unaffected. For $c$ to satisfy this condition, it must be sufficiently large so that the integrand of Eq.~(\ref{energy_cons1}) or Eq.~(\ref{energy_cons2}) is positive semi-definite everywhere for \emph{all} of the potentials characterising the set of systems $\set{S_{n}}$ under consideration. This condition is satisfied for any $c\geqslant\bar{c}_{1\rbr{2}}$, with $\bar{c}_{1}$ and $\bar{c}_{2}$ defined as \begin{align} \bar{c}_{1}\equiv\min\{&c\in\mathbb{R}\text{ s.t. }f\rbr{\mbf{r}_{1},\ldots,\mbf{r}_{N}}\geqslant 0,\nonumber \\ &\forall \set{\mbf{r}_{1},\dots,\mbf{r}_{N}}\text{ and }\forall\ S~\in\set{S_n}\},\label{cmin1}\\ \bar{c}_{2}\equiv\min\{&c\in\mathbb{R}\text{ s.t. }h\rbr{\mbf{r}}\geqslant 0,\forall\ \mbf{r}\text{ and }\forall\ S~\in\set{S_n}\},\label{cmin2} \end{align} for the metrics $D_{v_{1}}$ and $D_{v_{2}}$ respectively. 
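The definitions of $\bar{c}_{1}$ and $\bar{c}_{2}$ in Eqs.~(\ref{cmin1}) and~(\ref{cmin2}) amount to scanning all systems in the set, and all points, for the most negative value the ungauged integrand takes relative to the density. The sketch below does this for $\bar{c}_{2}$ with synthetic one-dimensional data; the arrays stand in for $\tau$, the interaction term, $v$, and $\rho$ of each system, and the numbers are purely illustrative.

```python
def c_bar_2(systems, rho_floor=1e-12):
    """Smallest gauge constant c such that
    h(x) = N * [tau(x) + w(x) + (v(x) + c) * rho(x)] >= 0
    at every grid point of every system in the set.

    Each system is a dict of grids: tau (kinetic energy density),
    w (interaction term), v (external potential), rho (density).
    """
    c_min = float("-inf")
    for s in systems:
        for tau, w, v, rho in zip(s["tau"], s["w"], s["v"], s["rho"]):
            if rho > rho_floor:  # h >= 0 holds trivially where the density vanishes
                c_min = max(c_min, -(tau + w + v * rho) / rho)
    return c_min

# Two synthetic 'systems' on a three-point grid (illustrative numbers).
systems = [
    {"tau": [0.2, 0.1, 0.2], "w": [0.1, 0.2, 0.1],
     "v": [-3.0, -1.0, -3.0], "rho": [0.5, 1.0, 0.5]},
    {"tau": [0.3, 0.2, 0.3], "w": [0.1, 0.1, 0.1],
     "v": [-5.0, -2.0, -5.0], "rho": [0.4, 1.2, 0.4]},
]

c = c_bar_2(systems)
# With this shared gauge, h(x) >= 0 everywhere for all systems:
assert all(tau + w + (v + c) * rho >= -1e-12
           for s in systems
           for tau, w, v, rho in zip(s["tau"], s["w"], s["v"], s["rho"]))
print(c)
```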
\section{Model Systems} In order to assess the performance of the potential metrics $D_{v_{1}}$ and $D_{v_{2}}$ and examine the two core theorems of DFT, we will study model systems for which we can obtain both the many-body and exact Kohn-Sham quantities with high accuracy. Since it is possible to reverse engineer the Kohn-Sham equations exactly for systems of two electrons~\citep{Perdew1982,Laufer1986,Filippi1994}, we will study two-electron model systems, namely, Hooke's atom and the Helium atom. Their Hamiltonians are \begin{align} \hat{H}_{HA}&=\frac{1}{2}\rbr{\mbf{p}_{1}^{2}+\omega^{2}r_{1}^{2}+\mbf{p}_{2}^{2}+\omega^{2}r_{2}^{2}}+\frac{1}{\abs{\ensuremath{\mbf{r}}_{1}-\ensuremath{\mbf{r}}_{2}}},\\ \hat{H}_{He}&=\frac{1}{2}\mbf{p}_{1}^{2}-\frac{Z}{r_{1}}+\frac{1}{2}\mbf{p}_{2}^{2}-\frac{Z}{r_{2}}+\frac{1}{\abs{\ensuremath{\mbf{r}}_{1}-\ensuremath{\mbf{r}}_{2}}}. \end{align} Hooke's atom can be solved exactly for particular frequencies via the method of Ref.~\citep{Taut1993}, and numerical solutions for all frequencies can be found by the methods of Ref.~\citep{Coe2008}. We solve the Helium atom with the variational method~\citep{Accad1971,Coe2009}. For our purposes, we need a basis set that will allow us to obtain the ground state for any entry in the Helium isoelectronic series, i.e., two-electron ions with any nuclear charge $Z$. The basis set chosen is \begin{equation} \label{Helium_basis} \chi_{ijk}\rbr{\ensuremath{\mbf{r}}_{1},\ensuremath{\mbf{r}}_{2}}=c_{ijk}N_{ijk}L_{i}^{\rbr{2}}\rbr{2Zr_{1}}L_{j}^{\rbr{2}}\rbr{2Zr_{2}}P_{k}\rbr{\cos{\theta}}, \end{equation} with \begin{equation} N_{ijk}=\sqrt{\frac{1}{\rbr{i+1}\rbr{i+2}}}\sqrt{\frac{1}{\rbr{j+1}\rbr{j+2}}}\sqrt{\frac{2k+1}{2}}, \end{equation} where $L_{n}^{\rbr{2}}$ are the generalised Laguerre polynomials, $P_{n}$ are Legendre polynomials, and $\theta$ is the angle between $r_{1}$ and $r_{2}$. 
The wavefunction for the Helium atom is then \begin{align} \label{Helium_wave} \psi\rbr{\ensuremath{\mbf{r}}_{1},\ensuremath{\mbf{r}}_{2}}=&\frac{1}{\sqrt{8}\pi}e^{-Z\rbr{r_{1}+r_{2}}}\sum_{i,j,k}^{i+j+k\leqslant\Omega}\chi_{ijk}\rbr{\ensuremath{\mbf{r}}_{1},\ensuremath{\mbf{r}}_{2}}, \end{align} where the parameter $\Omega$ controls the number of basis functions~\citep{Accad1971}. This choice of basis combines the approaches taken by Accad \textit{et al.}~\citep{Accad1971} and Coe \textit{et al.}~\citep{Coe2009}. It has the important advantages that, with the constants $N_{ijk}$, the basis functions are orthonormal and separable in the three coordinates $\rbr{2Zr_{1},2Zr_{2},\cos{\theta}}$. These coordinates are chosen so that the basis function with $i,j,k=0$ corresponds to the ground state of a hydrogen-like atom of charge $Z$. This basis function always makes the largest contribution to the ground state (i.e., $c_{000}\gg c_{ijk}$), particularly for large $Z$, and hence enables the ground state to converge more rapidly with respect to the number of basis functions. For both model systems, we will generate families of states for the metric analysis by varying a parameter in the external potentials of our systems. For Hooke's atom, we will vary the strength of the harmonic confinement via the frequency $\omega$, and for the Helium-like atoms we will vary the nuclear charge $Z$. \subsection{Solving the Kohn-Sham Equations for the Model Systems} \label{calc_ks} In order to be able to apply our metrics to quantities in the exact Kohn-Sham picture, we must be able to solve the Kohn-Sham equations exactly. Since the exact Kohn-Sham equations must reproduce the density from the many-body picture, we can use the exact density to reverse-engineer the Kohn-Sham equations. For our model systems, the ground state is a spin singlet.
Therefore, in the Kohn-Sham picture, both electrons are described by the same Kohn-Sham orbital and, thus, are expressed in terms of the exact density as~\citep{Filippi1994} \begin{equation} \label{KS_orbital} \phi_{KS}=\sqrt{\frac{\rho\ensuremath{\rbr{\mbf{r}}}}{2}}. \end{equation} The Kohn-Sham potential follows as~\citep{Filippi1994} \begin{equation} \label{KS_potential} v_{KS}\ensuremath{\rbr{\mbf{r}}}=\epsilon_{KS}+\frac{1}{2}\frac{\nabla^{2}\phi_{KS}}{\phi_{KS}}. \end{equation} In order to obtain $v_{KS}\ensuremath{\rbr{\mbf{r}}}$ from Eq.~(\ref{KS_potential}), we require the value of the Kohn-Sham eigenvalue, $\epsilon_{KS}$. Reference~\citep{Perdew1982} demonstrated that, provided $v_{xc}\ensuremath{\rbr{\mbf{r}}}\rightarrow 0$ as $\ensuremath{\mbf{r}}\rightarrow\infty$, the eigenvalue of the highest occupied Kohn-Sham state is equal to the ionisation energy of the system. For our model systems, only one Kohn-Sham state is occupied, and thus the eigenvalues for both electrons are equal to the ionisation energy. For Hooke's atom, when decomposed into centre-of-mass and relative motion components~\citep{Taut1993}, the centre-of-mass energy is identical to that of a one-electron harmonic oscillator of frequency $2\omega$, so the ionisation energy is clearly equal to the relative motion energy~\citep{Laufer1986,Filippi1994}. Ionising an electron from any entry in the Helium isoelectronic series results in a Hydrogenic atom with energy $-Z^{2}/2$ Hartrees. Therefore, the ionisation energy is found from the difference between the Helium and the Hydrogen ground-state energies. In order to apply our metrics to Kohn-Sham quantities, we need to consider the Hamiltonian of the whole $N$-particle Kohn-Sham system. 
The corresponding Schr\"{o}dinger equation is simply the sum of the Kohn-Sham equations for each electron, so the wavefunction is formed by taking the Slater determinant of the Kohn-Sham orbitals: \begin{align} \label{KS_slater} \psi_{KS}\rbr{\ensuremath{\mbf{r}}_{1},\ensuremath{\mbf{r}}_{2}}&= \begin{vmatrix} \phi_{KS}\rbr{\ensuremath{\mbf{r}}_{1}}\uparrow_{1} & \phi_{KS}\rbr{\ensuremath{\mbf{r}}_{2}}\uparrow_{2}\\ \phi_{KS}\rbr{\ensuremath{\mbf{r}}_{1}}\downarrow_{1} & \phi_{KS}\rbr{\ensuremath{\mbf{r}}_{2}}\downarrow_{2} \end{vmatrix},\nonumber\\ &=\phi_{KS}\rbr{\ensuremath{\mbf{r}}_{1}}\phi_{KS}\rbr{\ensuremath{\mbf{r}}_{2}}\rbr{\uparrow_{1}\downarrow_{2}-\downarrow_{1}\uparrow_{2}}. \end{align} We consider only the orbital part of the wavefunction in this paper, so the two-electron Kohn-Sham wavefunction simplifies to \begin{align} \label{KS_two_e_wavefunction} \psi_{KS}\rbr{\ensuremath{\mbf{r}}_{1},\ensuremath{\mbf{r}}_{2}}&=\phi_{KS}\rbr{\ensuremath{\mbf{r}}_{1}}\phi_{KS}\rbr{\ensuremath{\mbf{r}}_{2}}\nonumber\\ &=\frac{1}{2}\sqrt{\rho\rbr{\ensuremath{\mbf{r}}_{1}}\rho\rbr{\ensuremath{\mbf{r}}_{2}}}. \end{align} The potential for the two Kohn-Sham electrons' Hamiltonian is given by the sum of the single-particle Kohn-Sham potentials, \begin{equation} \label{KS_two_e_potential} V_{KS}\rbr{\ensuremath{\mbf{r}}_{1},\ensuremath{\mbf{r}}_{2}}=v_{KS}\rbr{\ensuremath{\mbf{r}}_{1}}+v_{KS}\rbr{\ensuremath{\mbf{r}}_{2}}. \end{equation} We will apply our metrics to these two-electron Kohn-Sham quantities. Equation~(\ref{KS_two_e_wavefunction}) shows that for a Kohn-Sham system the metrics $D_{v_{1}}$ and $D_{v_{2}}$ will, in general, take on different values. 
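The reverse-engineering step of Eqs.~(\ref{KS_orbital}) and~(\ref{KS_potential}) can be illustrated in one dimension with finite differences. The sketch below takes a model density for which the answer is known analytically, $\rho(x)=2\,\pi^{-1/2}e^{-x^{2}}$, i.e. two electrons in the ground state of a harmonic well with $\epsilon_{KS}=1/2$, and recovers $v_{KS}(x)=x^{2}/2$ from the density alone. This is an illustrative discretisation, not the procedure used for the atoms studied in this paper.

```python
import math

def v_ks_from_density(rho, dx, eps_ks):
    """Reverse-engineer the KS potential from a two-electron singlet density:
    phi_KS = sqrt(rho/2),  v_KS = eps_KS + phi'' / (2 phi)  (interior points)."""
    phi = [math.sqrt(r / 2.0) for r in rho]
    v = [None] * len(phi)
    for i in range(1, len(phi) - 1):
        lap = (phi[i - 1] - 2.0 * phi[i] + phi[i + 1]) / dx**2
        v[i] = eps_ks + lap / (2.0 * phi[i])
    return v

# Model density: two electrons in the 1D harmonic-oscillator ground state,
# rho(x) = 2 pi^(-1/2) exp(-x^2); the exact answer is v_KS(x) = x^2 / 2.
dx = 0.01
xs = [(i - 300) * dx for i in range(601)]  # grid on [-3, 3]
rho = [2.0 / math.sqrt(math.pi) * math.exp(-x * x) for x in xs]
v_ks = v_ks_from_density(rho, dx, eps_ks=0.5)

print(v_ks[300])  # x = 0.0 -> ~0.0
print(v_ks[400])  # x = 1.0 -> ~0.5
```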
\begin{figure*} \begin{center} \includegraphics[width=\textwidth]{compare_metrics.pdf} \caption{(Color online) The wavefunction, density, and potential distances for many-body systems [(a) and (b)] and Kohn-Sham systems [(c) and (d)] are plotted against the nuclear charge for Helium-like atoms (left), and against the confinement frequency for Hooke's atom (right). For Helium-like atoms the reference state is $Z=50.0$, and for Hooke's atom the reference state is $\omega=0.5$. All of the metrics are scaled such that their maximum value is $2$.} \label{compare_metrics} \end{center} \end{figure*} \section{Comparison of Metrics for Characterising Quantum Systems} \begin{figure*}[t] \includegraphics[width=\textwidth]{overlap_metrics.pdf} \caption{(Color online) Plots of rescaled potential distance $2D_{v_{1}}/[N(E_1+E_2)]$ (top) and $2D_{v_{2}}/[N(E_1+E_2)]$ (bottom) against rescaled wavefunction distance $D_{\psi}/\sqrt{N}$ [(a) and (c)] and against rescaled density distance $D_{\rho}/N$ [(b) and (d)]. We have plotted both the many-body and related Kohn-Sham systems for Helium-like atoms and Hooke's atom. In each panel we consider families of systems characterised by increasing and decreasing parameters starting from the reference state ($Z=50.0$ for Helium-like atoms, $\omega=0.5$ for Hooke's atom). The parameter ranges are $1.0<Z<2000.0$ for Helium-like atoms, and $2.6\times10^{-8}<\omega<1000.0$ for Hooke's atom.} \label{hk_dv} \end{figure*} Within the metric space approach to quantum mechanics, we now have metrics for wavefunctions, densities, and potentials. For systems subject only to scalar potentials and with a given many-body interaction, these quantities, taken together, fully characterise a many-body system. We are then, in principle, in the position of \textit{quantitatively} answering the following questions. Are two many-body systems close to each other in the Hilbert space? 
Could two many-body systems be close to each other with respect to some of these quantities but far away for others? We will address these questions, at least for the systems at hand and with a focus on DFT, in the rest of the paper: Apart from their general interest, these questions have practical implications, for example when considering how closely quantum information processes reproduce the desired result~\cite{Nielsen2000} or assessing the effectiveness of convergence loops in codes aiming to determine numerically accurate properties of systems, such as DFT codes. When considering ground states, thanks to the Hohenberg-Kohn theorem, the density, the wavefunction, and the external potential are all equally appropriate for characterising quantum systems subject to external scalar potentials. Therefore, it is worthwhile to make a comparison between the information given by each of the corresponding metrics. Figure~\ref{compare_metrics} shows the values of the wavefunction, density, and both potential metrics plotted against the parameter values for both of our model systems and considering both many-body (top panels) and Kohn-Sham (bottom panels) quantities. The distances are calculated with respect to a reference state, $Z=50.0$ for the Helium-like atoms and $\omega=0.5$ for Hooke's atom, and are all scaled to have a maximum value of $2$ for ease of comparison. We can immediately observe that all of the metrics follow broadly the same trend, increasing monotonically from the reference to their maximum value. The curves for both increasing and decreasing values of the parameters incorporate a region of rapidly increasing distance for parameter values close to the reference, a region where the distance asymptotically approaches its maximum for parameter values far from the reference, along with a transition region in between, where the largest differences between metrics are observed.
The crucial difference between the four metrics, however, is how the metrics converge to the maximum value. Figure~\ref{compare_metrics} shows that, as we depart from the reference, the potential metric $D_{v_{1}}$ is the fastest to converge to its maximum, followed by the wavefunction metric, with the density metric being the slowest to converge. The behaviour of the metric $D_{v_{2}}$ is different for the two systems that we study. We first note that the metric $D_{v_{2}}$ takes on different values for many-body and Kohn-Sham systems because, although they share the same density, many-body and related Kohn-Sham systems have different energies in general. For Helium-like atoms, this metric closely follows the trend of the density metric for both many-body and Kohn-Sham quantities. However, when considering Hooke's atom, the potential metric $D_{v_{2}}$ is similar in value to the wavefunction metric, albeit slightly greater for frequencies greater than the reference. These results suggest that, when comparing systems that are significantly different from one another, the density metric is the most useful tool for analysis, as it is capable of providing non-trivial information over a wider range of parameter space than the metrics for wavefunctions and potentials. When comparing systems that are relatively close to one another, all four metrics provide useful information to quantitatively characterise the differences between the systems. With regard to practical calculations, the density metric $D_{\rho}$, along with the potential metric $D_{v_{2}}$, has another significant advantage in that, in general, it is considerably easier to calculate than the metrics $D_{\psi}$ and $D_{v_{1}}$. The metrics $D_{\rho}$ and $D_{v_{2}}$, in fact, need only be integrated over three degrees of freedom, compared to $3N$ degrees of freedom for the other two metrics.
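The computational simplicity of $D_{\rho}$ can be made concrete with a short numerical sketch (our own illustration, not the paper's calculation): for spherical densities, $D_{\rho}(\rho_{1},\rho_{2})=\int|\rho_{1}-\rho_{2}|\,d^{3}r$ reduces to a single radial quadrature. Here we use hydrogenic 1s densities $\rho_{Z}(r)=2Z^{3}e^{-2Zr}/\pi$ as stand-ins for the exact interacting Helium-like densities; with $N=2$ electrons each, the distance is bounded by $4$, which it approaches for very different $Z$.

```python
import numpy as np

# Illustrative sketch of the density metric D_rho = integral |rho1 - rho2| d^3r.
# Densities: two electrons in a hydrogenic 1s orbital (an assumed stand-in for
# the exact interacting Helium-like densities used in the paper).
def density_distance(Z1, Z2, r):
    """Single radial quadrature for the density metric of spherical densities."""
    rho = lambda Z: 2.0 * Z**3 * np.exp(-2.0 * Z * r) / np.pi
    integrand = np.abs(rho(Z1) - rho(Z2)) * 4.0 * np.pi * r**2
    return np.trapz(integrand, r)

# log-spaced grid so that both tightly and loosely bound densities are resolved
r = np.geomspace(1e-5, 50.0, 40000)
```

For nearly identical systems the distance vanishes, while for $Z=1$ versus $Z=50$ the densities barely overlap and the distance approaches its maximum value of $N_{1}+N_{2}=4$.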
Also we can calculate the density metric from both the many-body and Kohn-Sham systems, since, unlike for wavefunctions and potentials, the Kohn-Sham system will, in principle, provide the exact many-body density. \section{Mappings relevant to the Hohenberg-Kohn Theorem} \begin{figure} \includegraphics[width=\columnwidth]{mb_ks_distances.pdf} \caption{(Color online) For (a) Helium-like atoms and (b) Hooke's atom, the distances between many-body and Kohn-Sham wavefunctions, and between many-body and Kohn-Sham potentials, are plotted against the parameter values. In addition, the ratio of the expectation of the electron-electron interaction to the many-body external potential energy is plotted and shown to follow a similar trend to the metrics. In the inset, we focus on Hooke's atom in the regime of distances covered by the Helium-like atoms.} \label{mb_ks} \end{figure} In Ref.~\citep{D'Amico2011} it was shown that the mapping between wavefunctions and densities in the Hohenberg-Kohn theorem [Eq.~(\ref{HK_map})] is a mapping between metric spaces; by examining it in this light several features were found. In this paper, we have shown that all of the relationships in Eq.~(\ref{HK_map}) are mappings between metric spaces: Using various families of states for each of our model systems, we will now look at the other relationships within the Hohenberg-Kohn theorem. We choose a reference state for each family of systems. We then calculate the distance between each member of the family and the reference state, for densities, wavefunctions, and potentials. In Fig.~\ref{hk_dv} we plot the potential metrics $D_{v_{1}}$ and $D_{v_{2}}$, respectively, against the wavefunction (left-hand panels) and density (right-hand panels) metrics for both interacting systems and their related Kohn-Sham systems and for increasing and decreasing parameters. 
In this way we compare for each plot eight different families of states as well as the behaviour of the many-body systems with respect to the non-interacting Kohn-Sham systems. The rescaling of the metrics has been chosen such that the dependence on the particle number is removed and that these figures are directly comparable to Fig.~2 of Ref.~\citep{D'Amico2011}, where corresponding plots for $D_{\psi}$ versus $D_{\rho}$ for Helium-like and Hooke's atoms were considered. Considering our plots, we observe many features in common with the relationship between wavefunction and density metrics of Ref.~\citep{D'Amico2011}: The relationships between the potential distances and the other distances are monotonic, with nearby wavefunctions and nearby densities mapped onto nearby potentials and distant wavefunctions and distant densities mapped onto distant potentials. The curves for increasing parameters and decreasing parameters within each of the four systems (Hooke's many-body, Hooke's Kohn-Sham, Helium-like many-body, Helium-like Kohn-Sham) are also seen to overlap, or almost overlap, with one another. Finally, all curves have an extended region (up to and including intermediate potential distances) where the relationship between potential and the other distances is linear or almost linear. Interestingly, depending on the potential distance and the system considered, we observe that this linear region can cover the entire parameter range; see Figs.~\ref{hk_dv}(a), \ref{hk_dv}(c), and~\ref{hk_dv}(d). With the exception of Fig.~\ref{hk_dv}(c), we notice that the curves have opposite convexity at large distances with respect to Fig.~2 of Ref.~\citep{D'Amico2011}, which suggests that, in general, the potential distance is more likely to converge to its maximum faster than wavefunction or density distances; hence, in general, it is less effective in distinguishing far-away systems (compare also with Fig.~\ref{compare_metrics}). 
In Ref.~\citep{D'Amico2011} a hint of universality was observed for the mapping between wavefunction and density distances; when looking at the potential versus wavefunction or density distances we note that the mapping from each many-body system is very close to the one from the corresponding exact Kohn-Sham system. This mapping is closer for Helium-like atoms than for Hooke's atom; this is because we are always in a weak-correlation regime for Helium-like atoms, while we consider both strong and weak correlation regimes for Hooke's atom (see Fig.~\ref{mb_ks}). However, the mapping is less close when comparing the behaviour of Hooke's with respect to Helium-like atoms, and particularly so for the $D_{v_2}$ distance, for which the convexity of the corresponding curves at large distances may be opposite [compare curves for the two Kohn-Sham systems in Fig.~\ref{hk_dv}(c)]. \section{Quantitative analysis of the Kohn-Sham Scheme} We will now consider the distance between wavefunctions and potentials of many-body systems, and those used to describe the corresponding Kohn-Sham systems~\footnote{For densities, it is required that $D_{\rho}\rbr{\rho_{MB},\rho_{KS}}\equiv0$.}, and study how these distances change throughout the parameter range. This allows us to provide a \textit{quantitative} description of the differences between the many-body and exact Kohn-Sham descriptions of quantum systems. Although DFT makes no promise that the many-body wavefunction is reproduced by the Kohn-Sham ground-state wavefunction, the latter is commonly used as an approximation to the former in various contexts, such as linear response calculations in time-dependent DFT and some magnetic-system calculations, even if the regime of validity of this approximation has not been properly established. It is therefore of interest to quantitatively determine how good this approximation is.
In Fig.~\ref{mb_ks}, the distances between many-body and Kohn-Sham wavefunctions and potentials are plotted for a range of parameter values. For potentials, we use here the metric $D_{v_{1}}$, since Eq.~(\ref{pot_metric2es}) shows that, in this case, the metric $D_{v_{2}}$ will yield only the difference in the energy of the two systems. We first observe that the wavefunction and potential distances, when rescaled to the same maximum value, always take approximately the same value throughout the parameter range explored for both systems. This demonstrates that the two metrics provide a consistent measure of how the many-body description differs from the Kohn-Sham description of our systems. For both systems we have also plotted the ratio of the Coulomb energy to the external potential energy for the many-body systems. This ratio can be seen to follow broadly the same trend as the metrics. This is an important observation as it provides further confirmation that the metrics derived from the metric space approach to quantum mechanics provide a physically relevant comparison of quantum mechanical functions. It also shows that, alongside the two metrics and at least for the systems considered, this ratio is a useful indicator of how much the many-body and Kohn-Sham descriptions of the system differ from one another. If we consider as a good performance indicator that the distance between the many-body and Kohn-Sham wavefunctions is up to 10\% of the maximum distance [i.e., $D_{\psi}\rbr{\psi_{MB},\psi_{KS}}<0.2$], then we see that for all families of systems the Kohn-Sham wavefunction is indeed a good approximation for a relatively large range of parameters, for $Z>1.5$ for the Helium isoelectronic series and $\omega>1.25$ for Hooke's atom. For Helium-like atoms, even at $Z=1$, the maximum difference between the many-body and Kohn-Sham systems is just 17.5\%.
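For the Helium-like family, the ratio just discussed can be estimated analytically with a textbook perturbative calculation (hydrogenic 1s orbitals, not the paper's exact interacting values): $\langle U\rangle \approx 5Z/8$ and $\langle V\rangle = -2Z^{2}$, so $|\langle U\rangle/\langle V\rangle| = 5/(16Z)$, which decays as $1/Z$.

```python
# Perturbative estimate (hydrogenic 1s orbitals; standard textbook values,
# not the paper's exact interacting results) of the Coulomb-to-external
# energy ratio for Helium-like atoms: <U> = 5Z/8, <V> = -2Z**2.
def coulomb_to_external_ratio(Z):
    """|<U>/<V>| = (5Z/8) / (2Z**2) = 5/(16Z)."""
    return (5.0 * Z / 8.0) / (2.0 * Z**2)

ratios = {Z: coulomb_to_external_ratio(Z) for Z in (1, 2, 10, 50)}
```

The $1/Z$ decay of this ratio is consistent with the observation that the Helium-like family remains in the weak-correlation regime at large $Z$, where the external potential increasingly dominates.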
For these systems, the external potential always dominates over the Coulomb interaction between the electrons, and we observe that the distance between the potentials is always larger than the distance between the wavefunctions. For Hooke's atom, for small and large values of $\omega$, we observe that the value of the potential metric is greater than that of the wavefunction metric, while, in the region where the ratio $\an{U}/\an{V}$ is approximately unity, the wavefunction metric takes a larger value than the potential metric. In the inset of Fig.~\ref{mb_ks}, we show the large $\omega$ behaviour of our metrics for Hooke's atom, which can be seen for Helium-like atoms in Fig.~\ref{mb_ks}(a). In this regime, both metrics and the ratio $\an{U}/\an{V}$ all tend to zero. This behaviour can be understood by considering the limit of the quantities of interest in the regime where the external potential strongly dominates over the Coulomb interaction. The Kohn-Sham external potential is the sum of the external potential used to describe the many-body system, the Hartree potential, and the exchange-correlation potential; in this regime, $V_{KS}\approx V_{ext}$, and hence $D_{v_{1}}\rbr{V_{KS},V_{ext}}\approx0$. Likewise, the many-body wavefunction approaches a non-interacting wavefunction which coincides with the Kohn-Sham wavefunction; hence, $D_{\psi}\rbr{\psi_{MB},\psi_{KS}}\approx0$. Physically, the wavefunction and potential distances between many-body and Kohn-Sham systems can be interpreted as a measure of specific electron-electron interaction effects. The Kohn-Sham wavefunction is the product of single-particle states; hence, the wavefunction distance can be interpreted as a measure of the features of the many-body wavefunction that go beyond single-particle approximations. In this respect this distance is a measure of correlation effects, which cannot be captured by mean-field-type approximations. 
For potentials, the value of the metric $D_{v_{1}}\rbr{V_{ext},V_{KS}}$ can be interpreted as measuring the contribution of the Hartree and exchange-correlation potentials to the Kohn-Sham potential. \section{Conclusion} The aim of this paper was to derive a metric for external potentials, which is motivated by their role in the Hohenberg-Kohn theorem, and more generally the crucial role external potentials play in modelling quantum systems. This metric complements the density and wavefunction metrics, providing us with metrics for each of the fundamental quantities of DFT. The tools we now have at our disposal have enabled us to take our metric analysis in other directions, such as the quantitative analysis of the Kohn-Sham scheme. In particular, since the densities of the Kohn-Sham and many-body interacting systems are the same, the potential metric is able to provide a meaningful insight into the Kohn-Sham scheme that the density metric cannot. By considering the conservation of energy and applying the metric space approach to quantum mechanics to it, we have derived two ``natural'' metrics for external potentials. These metrics can be applied to electronic systems subject to any physical scalar potential (including unbounded potentials such as Coulomb interactions), in eigenstates or out of equilibrium. We also showed how to extend our analysis to derive the potential metrics for systems incorporating both electronic and nuclear effects. This analysis can be straightforwardly extended to even more complex systems. We have also considered the effects of the gauge freedom of potentials and shown which conditions the metrics should satisfy to remain well defined when the preservation of relative energy differences is important to the problem considered.
As for all metrics derived within the metric space approach to quantum mechanics, our potential metrics are characterised by well-defined maximum values, which makes it possible to compare quantitatively the behaviours of very different systems. Physical systems subject to scalar potentials are defined through their external potentials, densities and wavefunctions: Here we have analysed in detail eight families of systems, all in their ground states, so that these quantities are subject to a one-to-one mapping through the Hohenberg-Kohn theorem, the pillar of Density Functional Theory. These families are defined by increasing and decreasing parameters with respect to reference systems for the interacting Helium isoelectronic series, the interacting Hooke's atom with varying confinement strength, and the two corresponding families of non-interacting exact Kohn-Sham systems. When comparing the performances of the metrics, we found that they converged onto their maximum values at different rates, with the potential metric $D_{v_{1}}$ converging first, followed by the wavefunction metric, and finally by the density metric, with the behaviour of the potential metric $D_{v_{2}}$ depending on the system studied. This strengthens the findings in Ref.~\citep{D'Amico2011} that the density is the best quantity to differentiate between distant systems. Importantly, however, we find that, in general, two systems close to (or distant from) each other with respect to the metric for one physical quantity remain so with respect to the metrics for all physical quantities. In the context of the Hohenberg-Kohn theorem, in Ref.~\citep{D'Amico2011} it was found that in metric spaces the mapping between wavefunctions and densities was monotonic, and incorporated a (quasi) linear mapping between small and between intermediate distances. 
When examining in metric space the relationships of the external potential with wavefunctions and densities in the Hohenberg-Kohn theorem, we find once more surprisingly simple mappings with a similar behaviour, with some curves showing an even greater range of linearity than the wavefunction-density mapping. These results are evidence of the deep connection between the quantities involved in the Hohenberg-Kohn theorem. However, while the interacting and related exact Kohn-Sham systems have almost identical behaviour, there are differences between the Hooke's and Helium-like families, especially in the intermediate- to large-distance regions, in contrast to Ref.~\citep{D'Amico2011}. We looked at the distance between many-body and Kohn-Sham quantities for both wavefunctions and external potentials, gaining quantitative insight into when, and by how much, the many-body and Kohn-Sham systems differ from one another. We showed that, when rescaled to the same maximum distance, wavefunctions and potentials provide a consistent picture, since they yield approximately the same distance values throughout all the parameter ranges considered. We also found that the two metrics followed the same qualitative trend as the ratio of Coulomb to external potential energies. The Kohn-Sham wavefunction has been used as an approximation to the many-body wavefunction, even if there is no promise of good behaviour, in this respect, from density functional theory. Our metrics allowed us to explore this approximation \textit{quantitatively}, at least for the systems at hand. For these systems we showed that the Kohn-Sham wavefunction indeed represents a well-behaved approximation which provides good quantitative results (10\% maximum error) for a relatively large range of the parameters explored. \begin{acknowledgments} We acknowledge fruitful discussions with E.K.U.~Gross. P.M.S. acknowledges support from EPSRC. P.M.S. and I.D.
acknowledge support from Royal Society Grant NA140436 and CNPq Grant: PVE--Processo: 401414/2014-0. All data published during this research are available by request from the University of York Data Catalogue 10.15124/dc3868e7-38eb-4ef0-b97c-210773f2251c \end{acknowledgments}
\section{Introduction} Supernova Remnants (SNRs) are believed to be the sites where the bulk of Galactic Cosmic Rays (CRs) are accelerated up to PeV energies ($1~ \rm PeV=10^{15}~\rm eV$) \citep[see, e.g., ][]{Hillas2013,blasi13}. In recent years, significant progress has been achieved in a few directions of exploring the CR acceleration in SNRs, notably using the $\gamma$-ray\xspace observations in the MeV/GeV and TeV energy bands \citep[see, e.g., ][]{Aharonian2013}. In particular, the detection of the so-called $\pi^0$-decay bump in the spectra of several mid-age SNRs is considered substantial evidence of the acceleration of protons and nuclei in SNRs. Moreover, the detection of more than ten young (a few thousand years old or younger) SNRs in TeV $\gamma$-rays highlights these objects as efficient particle accelerators, although the very origin of the $\gamma$-rays (leptonic or hadronic?) is not yet firmly established. More disappointingly, so far none of the TeV-emitting SNRs shows an energy spectrum that continues as a hard power law beyond 10 TeV. For a hadronic origin of the detected $\gamma$-rays, the ``early'' cutoffs in the energy spectra of $\gamma$-rays around or below 10 TeV imply a lack of protons inside the shells of SNRs with energies significantly larger than 100 TeV, and, consequently, SNRs do not operate as PeVatrons. However, there are two possibilities that would allow us to avoid such a conclusion, dramatic as it would be for the current paradigm of Galactic CRs: \vspace{1mm} \noindent (i) The detected TeV gamma-rays are of leptonic (Inverse Compton) origin.
Of course, alongside the relativistic electrons, protons and nuclei can (should) be accelerated as well, but we do not see the related $\gamma$-radiation because of their ineffective interactions caused by the low density of the ambient gas; \vspace{2mm} \noindent (ii) SNRs do accelerate protons to PeV energies; however, this occurs at the early stages of evolution of SNRs, when the shock speeds exceed 10,000 km/s, and we do not see the corresponding radiation well above 10 TeV because the PeV protons have already left the remnant. \vspace{2mm} Both of these scenarios significantly limit the potential of gamma-ray observations for the search for CR PeVatrons. Fortunately, there is another radiation component which contains independent and complementary information about these extreme accelerators. It is related to the synchrotron radiation of accelerated electrons, namely to the shape of the energy spectrum of radiation in the cutoff region, which can serve as a distinct signature of the acceleration mechanism and its efficiency. In the shock acceleration scheme, the maximum energy of accelerated particles scales as $E_0 \propto B \ v_{\rm sh}^2$. Therefore, the epoch of the first several hundred years of evolution of a SNR, when the shock speed $v_{\rm sh}$ exceeds 10,000 km/s and the magnetic field is large, $B \gg 10 \ \mu$G, could be an adequate stage for the operation of a SNR as a PeVatron, provided, of course, that the shock acceleration proceeds close to the Bohm diffusion limit \citep[see, e.g., ][]{Misha}. Remarkably, in this regime, the cutoff energy in the synchrotron radiation of the shock-accelerated electrons is determined by a single parameter, $v_{\rm sh}^2$ \citep{AhAt99,zirakashvili07}. Therefore, for a known shock speed, the position of the cutoff contains unambiguous information about the acceleration efficiency. For $v_{\rm sh} \simeq 10,000$~km/s, the synchrotron cutoff in the spectral energy distribution (SED) is expected around 10~keV.
Thus, the study of synchrotron radiation in the hard X-ray band can shed light on the acceleration efficiency of electrons and, consequently, provide an answer to whether these objects can operate as CR PeVatrons, given that in the shock acceleration scheme the acceleration of electrons and protons is expected to be identical. In this regard, G1.9+0.3, the youngest known SNR in our Galaxy \citep{reynolds08, green08}, is a perfect object with which to explore this unique tool. The X-ray observations with the Chandra and NuSTAR satellites \citep{reynolds09, zoglauer15} cover a rather broad energy interval, which is crucial for the study of the spectral shape of synchrotron radiation in the cutoff region. Such a study has been conducted by the team of the NuSTAR collaboration \citep{zoglauer15}. However, some conclusions and statements of that paper seem to us rather confusing and, to a certain extent, misleading. In this paper we present the results of our own analysis of the NuSTAR and Chandra data with an emphasis on the study of the SED of X-radiation over two decades, from 0.3 keV to 30 keV. Using the synchrotron spectrum and the Markov Chain Monte Carlo (MCMC) technique, we derive the energy distribution of electrons responsible for the X-rays, and discuss the astrophysical implications of the obtained results. \section{X-ray observations}\label{sec:data} The recent hard X-ray observations of G1.9+0.3 by the NuSTAR satellite are unique for understanding the acceleration and radiation processes of ultrarelativistic electrons in SNRs at the early stages of their evolution. The NuSTAR data, combined with the Chandra observations at lower energies, have been comprehensively analysed by \citet{zoglauer15}. In particular, it was found that the source can be resolved into two bright limbs with similar spectral features.
The combined Chandra and NuSTAR data sets have been claimed to be best described by the so-called {\it srcut} model \citep{Reynolds2008} or by a power-law function with an exponential cutoff. The characteristic cutoff energies in these two fits have been found to be around 3 keV and 15 keV, respectively \citep{zoglauer15}. To further investigate the features of the X-ray spectrum in the cutoff region, we performed an independent study based on the publicly available Chandra and NuSTAR X-ray data. For NuSTAR, we used the set of three observations with ID 40001015003, 40001015005, 40001015007, including both the focal plane A (FPMA) and B (FPMB) modules. The data have been analysed using HEASoft version 6.16, which includes NuSTARDAS, the NuSTAR Data Analysis Software package (version 1.7.1 with the NuSTAR CALDB version 20150123). For the Chandra data, we used the ACIS observations with ID 12691, 12692 and 12694. The Chandra data reduction was performed using version 4.7 of the CIAO (Chandra Interactive Analysis of Observations) package. In Fig.~\ref{fig:map} we show the X-ray sky map above 3~keV based on the NuSTAR 40001015007 data set. In order to gain the maximum possible statistics, for the spectral analysis we have chosen the entire remnant. The background regions were selected in a way that minimises the contamination caused by the PSF wings as well as by the stray light. The excess in the south of the FPMA image is the stray light from X-rays that hit the detector without impinging on the optics \citep{wik14}. We use the same source regions for the Chandra observations. The results of our study of the spatial distribution of X-rays appear quite similar to those reported by \citet{zoglauer15}. Therefore, in this paper we do not discuss the morphology of the source but focus on the study of the spectral features of the radiation.
The spectral shape of synchrotron radiation in the cutoff region is sensitive to the spectrum of the highest energy electrons which, in its turn, depends on the electron acceleration and energy loss rates. To explore a broad class of spectra, we describe the spectrum of X-rays in the following general form: \begin{equation} \frac{{\rm d}N}{{\rm d} \epsilon} = A \epsilon^{-\Gamma} \exp[-(\epsilon/\epsilon_0)^{\beta}] \ . \label{spectrum} \end{equation} The change of the index $\beta$ in the second (exponential) term allows a broad range of spectral behaviour in the cutoff region. For example, $\beta=0$ implies a pure power-law distribution, while $\beta=1$ corresponds to a power law with a simple exponential cutoff. In the fitting procedure, in addition to the three parameters $\epsilon_0$, $\Gamma$ and $\beta$, one should introduce one more parameter, the column density $N_{\rm H}$, which takes into account the energy-dependent absorption of X-rays. We fix this parameter to the value found by \citet{zoglauer15} from the fit of the data by their {\it srcut} spectral model. Strictly speaking, the best fit value of the column density should be different for different spectral models. To check the impact of different spectral models on the column density, we adopted different functions, leaving the column density as a free parameter in the fitting procedure. We found that the difference between the best fit column density and the above fiducial value is less than several percent. Therefore, in order to keep the procedure simple and minimise the number of free parameters, we adopt the value $N_{\rm H}=7.23 \times 10^{22}\ \rm cm^{-2}$ from the paper of \citet{zoglauer15}. The results of our fit of the NuSTAR and Chandra spectral points using the model ``power law with exponential cutoff'' in the general form of Eq.~(\ref{spectrum}), i.e. leaving $\beta$, $\Gamma$ and $\epsilon_0$ as free parameters, are shown in Table 1. One can see that the best fit gives a rather narrow range of the index $\beta$ around 1/2.
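A fit of this general form can be sketched in a few lines. The example below is an illustrative reconstruction with synthetic data, not the actual Chandra/NuSTAR spectral points (the real analysis folds in the instrument responses): we fit the logarithm of Eq.~(\ref{spectrum}) to mock fluxes generated from a known $(\Gamma, \epsilon_0, \beta)$ and check that the cutoff shape parameter is recovered.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch of fitting Eq. (1) (synthetic data, not the actual
# spectral points).  The fit is done in log-flux space to tame the
# large dynamic range of dN/de.
def log_model(eps, logA, Gamma, eps0, beta):
    """log of dN/de = A * eps**(-Gamma) * exp(-(eps/eps0)**beta)."""
    return logA - Gamma * np.log(eps) - (eps / eps0)**beta

rng = np.random.default_rng(1)
eps = np.geomspace(0.3, 30.0, 40)        # keV, roughly the band analysed
truth = (0.0, 2.0, 1.5, 0.5)             # logA, Gamma, eps0 [keV], beta
y = log_model(eps, *truth) + rng.normal(0.0, 0.05, eps.size)

# bounds keep eps0 and beta positive so the model stays well defined
popt, pcov = curve_fit(log_model, eps, y,
                       p0=(0.0, 2.0, 2.0, 0.8),
                       bounds=([-5.0, 0.0, 0.1, 0.05],
                               [5.0, 5.0, 20.0, 2.0]))
```

Because $\Gamma$, $\epsilon_0$ and $\beta$ are strongly correlated in the cutoff region, a good fit requires data extending well below and into the cutoff, which is why the combined 0.3--30 keV coverage matters.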
In Table 1 we also show separately the results of the fits with three fixed values of $\beta$: 0, 1/2, and 1. While the pure power-law spectrum ($\beta=0$) can be unambiguously excluded, the model of a power law with a simple exponential cutoff ($\beta=1$) is not favourable either. It is excluded at the $3 \sigma$ statistical significance level. In summary, the combined Chandra and NuSTAR data are best described by the index $\beta \approx 0.5$ and $\epsilon_0 \approx 1.5$~keV. Whereas $\beta=1/2$ seems to be a natural outcome (see below), the cutoff energy around 1.5 keV is a rather unexpected result. Namely, it implies that the acceleration of electrons in G1.9+0.3 proceeds significantly more slowly than one would anticipate given the very large, 14,000 km/s, shock speed. This can be seen from the comparison of the SED of G1.9+0.3 with that of one of the most effective particle accelerators in our Galaxy, the $\approx$1600 year old SNR RX~J1713.4-3946 (see Fig.~\ref{fig:sed1}). The cutoff energy in the synchrotron spectrum of shock-accelerated electrons is proportional to the square of the shock speed, $v_{\rm sh}^2$ \citep{AhAt99}. Therefore, in order to exclude the difference in the cutoff energies caused by the difference in the shock speeds, we rescale the energies of the spectral points of RX~J1713.4-3946 by the factor $(14,000 \ {\rm km/s}/v_{\rm sh})^2$, where the shock speed of RX~J1713.4-3946 is about $v_{\rm sh} \simeq 4,000 \ \rm km/s$ \citep{Uchiyama07}. After such a normalisation, the cutoff energy of RX~J1713.4-3946 becomes an order of magnitude higher than the cutoff in G1.9+0.3. The acceleration of electrons in RX~J1713.4-3946 proceeds close to the Bohm diffusion limit, thus providing an acceleration rate close to the maximum value \citep{Uchiyama07,zirakashvili10}. Consequently, we may conclude that the current acceleration rate of electrons in G1.9+0.3 is lower, by an order of magnitude, than the maximum possible rate.
It should be noted that the physical meaning of Eq.(\ref{spectrum}) should not be overestimated. Namely, it should be considered as a convenient analytical representation of the given set of measured spectral points. Consequently, the parameters ($\Gamma,\beta,\epsilon_0$) that enter Eq.(\ref{spectrum}) should be treated as a combination of formal fit parameters rather than physical quantities. For example, $\epsilon_0$ in the exponential term of Eq.(\ref{spectrum}) should not necessarily coincide with the cutoff energy (or the maximum in the SED). Indeed, in different ($\Gamma,\beta,\epsilon_0$) combinations describing the same spectral points, the parameter $\epsilon_0$ could have significantly different values. Analogously, $\Gamma$ should not be treated as a power-law index but rather as a parameter which, in combination with $\beta$ and $\epsilon_0$, determines the slope (the tangent) of the spectrum immediately before the cutoff region. The maximum acceleration rate of particles is achieved when the acceleration proceeds in the Bohm diffusion limit. In the energy-loss dominated regime, the spectra of synchrotron radiation can be expressed by simple analytical formulae \citep{zirakashvili07}. Because of the compression of the magnetic field, the overall synchrotron flux of the remnant is dominated by the radiation from the downstream region (see Fig.\ref{fig:sed2}). The SED of the latter can be presented in the following form \citep{zirakashvili07}: \begin{equation} \epsilon^2\frac{{\rm d}N}{{\rm d} \epsilon} \propto \epsilon^2 (\epsilon / \epsilon_0)^{-1} [1+0.38 (\epsilon/\epsilon_0)^{0.5}]^{11/4} \exp[-(\epsilon/\epsilon_0)^{1/2}] \ , \label{shape} \end{equation} with \begin{equation} \epsilon _0= \hbar \omega _0= \frac {\mathrm{2.2\ keV}}{\eta (1+\kappa ^{1/2})^2}\left( \frac {u_1}{\mathrm{3000\ km\ }s^{-1}} \right) ^2 \ , \label{e0} \end{equation} where $\eta$ takes into account the deviation of the diffusion coefficient from its minimum value (in the nominal Bohm diffusion limit, $\eta =1$). 
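Eqs.(\ref{shape}) and (\ref{e0}) are straightforward to evaluate numerically. The sketch below (helper names are ours; $\kappa=1/\sqrt{11}$ from the standard theory is used as the default) illustrates how strongly the cutoff energy depends on $\eta$ and on the shock speed.

```python
from math import exp, sqrt

def cutoff_energy_keV(eta, u1_kms, kappa=1.0 / sqrt(11.0)):
    """Eq. (3): eps0 = 2.2 keV / (eta * (1 + kappa**0.5)**2) * (u1 / 3000 km/s)**2."""
    return 2.2 / (eta * (1.0 + sqrt(kappa))**2) * (u1_kms / 3000.0)**2

def sed_downstream_shape(eps_keV, eps0_keV):
    """Eq. (2), up to an overall normalization: downstream synchrotron SED shape."""
    x = eps_keV / eps0_keV
    return x * (1.0 + 0.38 * sqrt(x))**(11.0 / 4.0) * exp(-sqrt(x))

# For the 14,000 km/s shock of G1.9+0.3, Bohm diffusion (eta = 1) would put
# the cutoff near 20 keV; eta = 20 brings it down to about 1 keV, close to
# the fitted value of ~1.4 keV.
```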
In the standard shock acceleration theory, the momentum index of accelerated electrons is $\gamma_{\rm s}=4$, and the ratio of the upstream and downstream magnetic fields is $\kappa=1/\sqrt{11}$. \begin{figure*} \centering \includegraphics[width=0.4\linewidth]{fa.eps}\includegraphics[width=0.4\linewidth]{fb.eps} \caption{Images from the observation 40001015007 for the FPMA (left) and FPMB (right) modules. The source and background regions are indicated by the white and green contours, respectively.} \label{fig:map} \end{figure*} In Fig.\ref{fig:sed2}, the spectral points of G1.9+0.3 are compared with the theoretical predictions for synchrotron radiation in the upstream and downstream regions \citep{zirakashvili07}. The calculations are performed for two values of the parameter $\eta$ characterising the acceleration efficiency: $\eta=1$ (the Bohm diffusion regime) and a 20 times slower acceleration rate ($\eta=20$). The good (better than 20\%) agreement of the spectral points with the theoretical curves for $\eta=20$ tells us that in G1.9+0.3 electrons are accelerated only at the 5\% efficiency level. Although the spectral points are not explicitly presented in the paper of \citet{zoglauer15}, so a direct comparison with our results is not possible, the conclusions of our study of the energy spectrum of G1.9+0.3 seem to be in agreement with the results of \citet{zoglauer15}. However, because of the incorrect interpretation of the process of formation of the spectrum of synchrotron radiation, the statements in the paper by \citet{zoglauer15} are misleading (see Appendix \ref{app:a}). \begin{figure*} \centering \includegraphics[width=0.55\linewidth]{sed_vs.eps} \caption{The spectral points of G1.9+0.3 (this work; black circles) and RX~J1713.4-3946 (red squares) from \citet{1713_suzaku}. The energies of the points of RX~J1713.4-3946 are rescaled by the square of the ratio of the shock speeds of G1.9+0.3 and RX~J1713.4-3946: $\rm (14,000 \ km/s / 4000 \ km/s)^2=12.25$. 
} \label{fig:sed1} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.55\linewidth]{cur.eps} \caption{The spectral points of G1.9+0.3 (this work) compared to the predictions for synchrotron radiation of the shock-accelerated electrons in the downstream and upstream regions \citep{zirakashvili07} for two regimes of diffusion: Bohm diffusion ($\eta=1$) and 20 times faster diffusion ($\eta=20$). } \label{fig:sed2} \end{figure*} \begin{table*}[htbp] \caption{Spectral fitting results for G1.9+0.3} \label{tab:1} \centering \begin{tabular}{l|c|c|c|c} \hline model & PL index $\Gamma$ & cutoff $\epsilon_0$ (keV) & $\beta$ & $\chi ^2 /\rm d.o.f.$ \\ \hline \hline PL & 2.54 (2.52--2.56) & -- & 0 & 1089.4/666 \\ \hline PL+ecut & 2.04 (1.98--2.10) & 11.8 (10.5--13.3) & 1 & 697.7/665 \\ \hline PL+ecut ($\beta=0.5$) & 1.65 (1.60--1.70) & 1.68 (1.50--1.90) & 0.5 & 686.2/665 \\ \hline PL+ecut ($\beta$ free) & 1.62 (1.48--1.75) & 1.41 (1.30--1.55) & 0.48 (0.40--0.56) & 685.8/664 \\ \hline \end{tabular} \end{table*} \section{Relativistic electrons and magnetic fields} The joint treatment of the X-ray and $\gamma$-ray data, under the simplified assumption that the same electron population is responsible for the broad-band radiation through the synchrotron and inverse Compton channels, provides information about the magnetic field and the total energy budget in relativistic electrons. G1.9+0.3 has been observed in the VHE $\gamma$-ray band with the H.E.S.S. Cherenkov telescope system. Although no positive signal has been detected \citep{hessG1.9}, the $\gamma$-ray flux upper limits allow meaningful constraints on the average magnetic field in the X-ray and $\gamma$-ray production region. 
For the calculations of the broad-band SED, we adopt the same background radiation fields as used in \citet{hessG1.9}: the infrared component with a temperature of $48~\rm K$ and an energy density of $1.5~\rm eV \ cm^{-3}$, and the optical component with a temperature of $4300~\rm K$ and an energy density of $14.6~\rm eV \ cm^{-3}$. The comparison of the model calculations with the observations (see Fig.\ref{fig:SEDmodeling}) gives a lower limit on the magnetic field, $B \geq 17 \ \rm \mu G$. Under certain assumptions, the magnetic field can also be constrained based on the X-ray data alone. In the ``standard'' shock acceleration scenario, electrons are accelerated with the power-law index $\alpha =2$. However, because of the short radiative cooling time, the spectrum of the highest energy electrons (the X-ray producers) becomes steeper, $\alpha=2 \to 3$. Consequently, in the downstream region, where the bulk of the synchrotron radiation is produced, X-rays have a photon index $\Gamma=2$. The synchrotron cooling time can be expressed through the magnetic field and the X-ray photon energy: $t_{\rm synch} \simeq 50 \ (B/100 \ \rm \mu G)^{-3/2} (\epsilon/1~\rm keV)^{-1/2}$~years. Thus, for $\epsilon \sim 1 \rm \ keV$ and the age of the SNR of $\sim 150$~yr, we find that the magnetic field should be larger than $50 \ \mu$G. The combined Chandra and NuSTAR data cover two decades in energy, from sub-keV to tens of keV. This allows the derivation of the energy distribution of electrons, $W(E)=E^2{\rm d}N_{\rm e}/{\rm d}E$, in the most interesting region around the cutoff. The results shown in Fig.\ref{fig:ele} are obtained using the Markov Chain Monte Carlo (MCMC) code {\it Naima} developed by V. Zabalza\footnote{\url{https://github.com/zblz/naima}}. It is assumed that the magnetic field is homogeneous both in space and time. The results shown in Fig.\ref{fig:ele} are calculated for the fiducial value of the magnetic field $B=100~\rm \mu G$; however, they can be rescaled to any other value of the field. 
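The magnetic-field lower limit quoted above follows directly from the cooling-time formula; a minimal numeric sketch (helper names are ours):

```python
def t_synch_yr(B_muG, eps_keV):
    """Synchrotron cooling time, t ~ 50 (B/100 muG)^(-3/2) (eps/1 keV)^(-1/2) yr."""
    return 50.0 * (B_muG / 100.0)**-1.5 * eps_keV**-0.5

def B_min_muG(age_yr, eps_keV=1.0):
    """Field strength at which t_synch equals the SNR age: requiring the
    X-ray emitting electrons to have cooled (t_synch <= age, giving the
    steepened Gamma = 2 spectrum) then implies B >= B_min."""
    return 100.0 * (50.0 / (age_yr * eps_keV**0.5))**(2.0 / 3.0)
```

For $\epsilon = 1$~keV and an age of $\sim 150$~yr this gives $B \gtrsim 48\ \mu$G, i.e. the $\sim 50\ \mu$G quoted in the text.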
Note that while the shape of the spectrum does not depend on the strength of the magnetic field, the energies of individual electrons scale as $E \propto B^{-1/2}$, and the total energy contained in electrons scales as $\propto B^{-2}$. Since in the ``standard'' diffusive shock acceleration scenario the synchrotron X-ray flux is contributed mainly by the downstream region, the results in Fig.\ref{fig:ele} correspond to the energy distribution of electrons in the same region. For comparison, we show the energy distribution of electrons calculated using the formalism of \citet{zirakashvili07}. Apparently, the good agreement between the derived electron spectrum and the theoretical curve for $\eta=20$ naturally reflects the agreement between the X-ray observations and the theoretical predictions demonstrated in Fig.\ref{fig:sed2}. \begin{figure*} \centering \includegraphics[width=0.4\linewidth, height=0.3\linewidth]{sed_G19+03.eps} \caption{The X-ray SED as well as the VHE upper limit from \citet{hessG1.9}. The curves are the synchrotron and IC emission models fitted to derive the lower limit on the magnetic field. } \label{fig:SEDmodeling} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.4\linewidth, height=0.3\linewidth]{electrondis.eps} \caption{The electron spectrum derived from the X-ray data points (black curve and shaded area) and the theoretically predicted integrated electron spectrum in a young SNR (red curve) assuming fast diffusion, i.e., $\eta = 20$ in Eq.(\ref{e0}). Also shown is the contribution from the downstream region. } \label{fig:ele} \end{figure*} \section{Conclusions} SNRs are believed to be the major contributors to the Galactic CRs. The recent detections of TeV emission from more than ten young SNRs (of the age of a few thousand years or younger) demonstrate the ability of these objects to accelerate particles, electrons and/or protons, to energies up to 100 TeV. 
Yet, we do not have observational evidence for the extension of hard $\gamma$-ray spectra well beyond 10~TeV. Therefore, one cannot claim the acceleration of protons and nuclei by SNRs to PeV energies. On the other hand, one cannot claim the opposite either, given the possibility that the acceleration of PeV protons and nuclei could happen at the early stages of evolution of SNRs, when the shock speeds exceed 10,000~km/s. Then, the escape of the highest energy particles at later stages of evolution of SNRs can explain the spectral steepening of $\gamma$-rays at multi-TeV energies from remnants older than $\sim 1000$~years. In this regard, the youngest known SNR in our Galaxy, G1.9+0.3, with a measured shock speed of 14,000~km/s, seems to be a unique object with which to explore the potential of SNRs for the acceleration of protons and nuclei to PeV energies. Such measurements have been performed with the H.E.S.S. array of Cherenkov telescopes. Unfortunately, no positive signal has been detected. On the other hand, the recent observations of G1.9+0.3 in hard X-rays by NuSTAR provide unique information about the acceleration efficiency of electrons. Together with the Chandra data at lower energies, these data allow model-independent conclusions. Although the general shape of the energy spectrum of X-rays is in very good agreement with the predictions of the diffusive shock-acceleration theory, the acceleration rate appears to be an order of magnitude slower than the maximum acceleration rate achieved in the nominal Bohm diffusion limit. To a certain extent, this is a surprise, especially when compared with young SNRs like Cas~A and RX~J1713.4-3946, in which the acceleration of electrons proceeds in a regime close to the Bohm diffusion limit. If the acceleration of protons and nuclei proceeds in the same manner as the electron acceleration, this result could have a negative impact on the ability of G1.9+0.3 to operate as a PeVatron. 
Apparently, the observations of G1.9+0.3 alone are not sufficient to determine whether this conclusion can be generalised to other SNRs.
\section{Introduction} \label{sec:intro} This paper introduces a new canonical decomposition in matching theory. In this section, we give a brief explanation of our results. Matching theory~\cite{lp1986} is one of the most classical and fundamental fields in combinatorics. Given a graph, a {\em matching} is a set of edges in which no two are adjacent. As small matchings such as a singleton exist trivially by definition, {\em maximum} matchings typically attract great interest. As can be seen from the definition, a matching is a basic way to express pairings of elements, and therefore has been intensively studied not only in graph theory~\cite{DBLP:books/daglib/0030488} but also in algebra~\cite{duff1986direct, DBLP:journals/combinatorica/SzegedyS06, DBLP:conf/fct/Lovasz79, geelen2000, lp1986}. The role of matching theory in combinatorial optimization is especially important. In the decades since 1965, the remarkable growth of combinatorial optimization has been driven by {\em polyhedral combinatorics}~\cite{grotschel2012geometric, schrijver2003}, which explores a systematic and unified approach to numerous types of combinatorial problems through linear programming theory. The maximum matching problem serves as an archetypal prototype in polyhedral combinatorics~\cite{lp1986, schrijver2003}. Therefore, progress in the theory of matchings leads to benefits for the entire field of combinatorial optimization. {\em Canonical decompositions} are highly versatile tools that form the foundation of matching theory~\cite{lp1986}. There is a type of structure theorem that defines a uniquely determined partition of a graph and then uses this partition to state the matching theoretic properties of the graph. A canonical decomposition is a way of understanding graphs that is naturally derived from one of these structure theorems. 
The known canonical decompositions are the following: the {\em Dulmage-Mendelsohn decomposition}~\cite{dm1958,dm1959,dm1963}, the {\em Kotzig-Lov\'asz decomposition}~\cite{kotzig1959a, kotzig1959b, kotzig1960, lovasz1972structure}, and the {\em Gallai-Edmonds decomposition}~\cite{gallai1964, edmonds1965}. The power of each canonical decomposition originates partly from its uniqueness for a given graph. Therefore, in matching theory, the adjective ``canonical'' has come to mean being unique for a given graph, and being canonical itself has been considered important. However, we sometimes encounter problems that cannot be solved successfully with these canonical decompositions, because they are applicable only to particular classes of graphs or do not provide sufficient information. The Dulmage-Mendelsohn and Kotzig-Lov\'asz decompositions target bipartite graphs and {\em consistently factor-connected graphs}, respectively. The Gallai-Edmonds decomposition, by definition, targets all graphs, but tends to be too sparse; therefore, some classes of graphs, such as {\em factorizable graphs}, fall into trivially irreducible cases, which is a limitation that cannot be disregarded. To address these limitations, in this paper, we establish the {\em basilica decomposition}, a new canonical decomposition that is applicable to all graphs and provides much finer information than the Gallai-Edmonds decomposition. We derive this new canonical decomposition using the notion of {\em factor-components}, which serve as the fundamental building blocks that constitute a graph when studying matchings. The properties of the maximum matchings are captured by describing both how an entire given graph is constructed from its factor-components and the inner structure of each factor-component. 
More precisely, the main results that constitute the new canonical decomposition are the following: \begin{rmenum} \item \label{item:intro:order} The organization of a given graph in terms of its factor-components can be understood as a partially ordered structure. The set of factor-components forms a poset with respect to a certain canonical binary relation, which is similar to the Dulmage-Mendelsohn decomposition. \item \label{item:intro:sim} A generalization of the Kotzig-Lov\'asz decomposition is provided that targets general graphs, which describes the inner structure of each factor-component in the context of the entire given graph. \item \label{item:intro:cor} Although \ref{item:intro:order} and \ref{item:intro:sim} are established independently, they have a certain canonical relationship that enables us to understand a graph as an architectural building-like structure in which these ideas are unified naturally. The integrated notion obtained from this relationship is our new canonical decomposition. \end{rmenum} Regarding our proofs, we obtain this new canonical decomposition without using any known results; thus, it is purely self-contained. Additionally, the proof that establishes the generalization of the Kotzig-Lov\'asz decomposition contains a greatly shortened and purely self-contained proof of the classical Kotzig-Lov\'asz decomposition. Considering the important role of canonical decompositions, we believe that our results will contribute to further developments in combinatorics. In fact, several consequential results have already been obtained~\cite{DBLP:conf/cocoa/Kita13, kita2012canonical, kita2014alternative, kita2015graph}. The remainder of this paper is organized as follows. In Section~\ref{sec:def}, we present preliminary definitions and lemmas. In Section~\ref{sec:canonical}, we explain more about the technical background of the canonical decompositions and what we aim to establish in this paper. 
In Section~\ref{sec:props}, we list some elementary well-known lemmas used in later sections, with self-contained proofs. The new results of this paper appear in Section~\ref{sec:nonpositive} onward. In Section~\ref{sec:nonpositive}, we provide a statement about {\em consistently factor-connected graphs} that is used in later sections. The main theorems that establish the basilica decomposition are then presented; we present \ref{item:intro:order}, \ref{item:intro:sim}, and \ref{item:intro:cor} in Sections~\ref{sec:order}, \ref{sec:part}, and \ref{sec:cor}, respectively. In Section~\ref{sec:pertinentprops}, we present some properties of the basilica decomposition. In Section~\ref{sec:alg}, we propose a polynomial time algorithm for computing the basilica decomposition. Finally, in Section~\ref{sec:conclusion}, we conclude this paper. \section{Definitions}\label{sec:def} \subsection{General Statements} For standard definitions and notation for sets, graphs, and algorithms, we mostly follow Schrijver~\cite{schrijver2003}. In the following, we list those that may be non-standard or exceptional. We denote the vertex set of a graph $G$ by $V(G)$ and the edge set by $E(G)$. We treat paths and circuits as graphs; that is, a path is a connected graph in which every vertex is of degree two or less and at least one vertex is of degree less than two, whereas a circuit is a connected graph in which every vertex is of degree two. Given a path $P$ and vertices $x,y\in V(P)$, $xPy$ denotes the subpath of $P$ whose ends are $x$ and $y$. We sometimes regard a graph as its vertex set. As usual, a singleton $\{x\}$ is sometimes denoted simply by $x$. In the remainder of this section, unless otherwise stated, let $G$ be a graph and let $X\subseteq V(G)$. \subsection{Operations on Graphs} The subgraph of $G$ induced by $X$ is denoted by $G[X]$, and $G[\Vg\setminus X]$ is denoted by $G-X$. We denote by $G/X$ the contraction of $G$ by $X$. 
That is, $V(G/X) = (V(G)\setminus X) \cup \{x\}$, where $x\not\in V(G)$, and $E(G/X) = (E(G) \setminus E(G[X]) \setminus \parcut{G}{X}) \cup S$, where $S$ is obtained by replacing each edge $uv\in \parcut{G}{X}$ with $u\in X$ and $v\not\in X$ by $xv$. Let $\what{G}$ be a supergraph of $G$, and let $F\subseteq E(\what{G})$. We denote by $G+F$ and $G-F$ the graphs obtained by adding $F$ to $G$ and deleting $F$ from $G$ without removing any vertices, respectively. The union of two subgraphs $G_1$ and $G_2$ of $G$ is denoted by $G_1 + G_2$. For simplicity, regarding these operations of creating a new graph from given graphs, we identify the vertices, edges, and subgraphs of the newly created graph with those of the old graphs to which they naturally correspond. \subsection{Functions on Graphs} A {\em neighbor} of $X$ is a vertex in $V(G)\setminus X$ that is joined to a vertex in $X$. The set of neighbors of $X$ is denoted by $\parNei{G}{X}$. Given $Y, Z\subseteq V(G)$, $E_{G}[Y, Z]$ denotes the set of edges joining $Y$ and $Z$, and $\delta_{G}(X)$ denotes $E_{G}[X, V(G)\setminus X]$. We sometimes denote $E_G[X, Y]$, $\delta_G(X)$, and $\parNei{G}{X}$ simply by $E[X,Y]$, $\delta(X)$, and $\Gamma(X)$, respectively, if their subscripts are apparent from the context. \subsection{Matchings} A set of edges is a {\em matching} if any two distinct edges in it are disjoint. We say that a matching $M$ {\em covers} a vertex $v$ if $v$ is incident to an edge in $M$; otherwise, we say that $M$ {\em exposes} $v$. A {\em maximum matching} is a matching of the greatest cardinality. A {\em perfect matching} is a matching that covers all vertices. Note that a perfect matching is a maximum matching, but the converse does not necessarily hold. A graph is {\em factorizable} if it has a perfect matching. A {\em near-perfect matching} is a matching that covers all vertices except for one. A graph is {\em factor-critical} if, for any vertex, there is a near-perfect matching that exposes it. 
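The definitions above are easy to check mechanically on small examples. The brute-force sketch below (helper names are ours; exponential time, intended only for tiny graphs) uses the fact that a graph is factor-critical iff $G-v$ has a perfect matching for every vertex $v$.

```python
from itertools import combinations

def is_matching(edges):
    """True iff no two of the given edges share a vertex."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def nu(vertices, edges):
    """Size of a maximum matching of the induced subgraph, by brute force."""
    E = [(u, v) for (u, v) in edges if u in vertices and v in vertices]
    for k in range(len(E), 0, -1):
        if any(is_matching(c) for c in combinations(E, k)):
            return k
    return 0

def factor_critical(vertices, edges):
    """Factor-critical iff |V| is odd and G - v has a perfect matching
    (a maximum matching of size (|V|-1)/2) for every vertex v."""
    n = len(vertices)
    return n % 2 == 1 and all(
        2 * nu(vertices - {v}, edges) == n - 1 for v in vertices)
```

For example, the 5-cycle is factor-critical, whereas the path on three vertices is not (removing its middle vertex leaves no edges).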
Let $M$ be a matching of a graph $G$. We say that $X$ is {\em closed with respect to} $M$ if $\parcut{G}{X}\cap M = \emptyset$. We denote $M\cap E(G[X])$ by $M_X$. We say that a path or circuit is $M$-alternating if edges in $M$ and edges not in $M$ appear alternately along it. More precisely, a circuit $C$ is {\em $M$-alternating} if $M\cap E(C)$ is a perfect matching of $C$. We define three types of $M$-alternating paths. Let $P$ be a path with ends $x$ and $y$. We say that $P$ is {\em $M$-saturated} or {\em $M$-exposed} between $x$ and $y$ if $M\cap E(P)$ or $E(P)\setminus M$, respectively, is a perfect matching of $P$. We say that $P$ is {\em $M$-forwarding} from $x$ to $y$ if $M\cap E(P)$ is a near-perfect matching of $P$ that exposes $y$. Accordingly, a path with one vertex is $M$-forwarding. That is, $M$-saturated and $M$-exposed paths have an odd number of edges, and their ends are covered and exposed by $M$, respectively. In contrast, an $M$-forwarding path from $x$ to $y$ has an even number of edges; $x$ is covered by $M$ as long as the path has at least one edge, whereas $y$ is always exposed. An {\em ear} relative to $X$ is a path with two distinct ends in $X$ such that every other vertex is outside $X$, or a circuit in which exactly one vertex is in $X$. Let $P$ be an ear relative to $X$. Even if $P$ is a circuit, the {\em ends} of $P$ are the vertices in $V(P)\cap X$, and the {\em internal} vertices are those in $V(P)\setminus X$. Hence, for convenience, if $x$ is the only end and $y$ is an internal vertex of $P$, we denote by $xPy$ one of the paths on $P$ between $x$ and $y$. The set of internal vertices of $P$ is denoted by $\earint{P}$. If $\earint{P}$ intersects $Y\subseteq V(G)$, then we say that $P$ {\em traverses} $Y$. We say that $P$ is an $M$-ear if $P\setminus X$ is an $M$-saturated path. \subsection{Gallai-Edmonds Family} Let $G$ be a graph. The set of vertices that are exposed by some maximum matching is denoted by $D(G)$. 
The set $\parNei{G}{D(G)}$ is denoted by $A(G)$, and the set $V(G)\setminus D(G) \setminus A(G)$ is denoted by $C(G)$. We call $\{D(G), A(G), C(G)\}$ the {\em Gallai-Edmonds family} of $G$, because the {\em Gallai-Edmonds decomposition} is derived from a structure theorem regarding $D(G)$, $A(G)$, and $C(G)$. \subsection{Factor-Connected Components} Let $G$ be a graph. An edge $e\in E(G)$ is {\em allowed} if there is a maximum matching of $G$ containing $e$. Let $C_1,\ldots, C_k$ be the connected components of the subgraph of $G$ determined by the union of the allowed edges. We call $G[C_i]$ a {\em factor-connected component} or a {\em factor-component} of $G$ for each $i\in \{1,\ldots, k\}$. We denote the set of factor-connected components of $G$ by $\mathcal{G}(G)$. Thus, a graph is composed of its factor-connected components and the edges joining distinct factor-connected components. In addition, a set of edges is a maximum matching if and only if it is a disjoint union of maximum matchings taken from each factor-component. Hence, we can regard factor-components as the fundamental building blocks that determine the matching structure of a graph. A factor-component is {\em consistent} if it is disjoint from $D(G)$; otherwise, it is {\em inconsistent}. It is also easily observed that a factor-component is a factorizable graph if and only if it is consistent. Therefore, given a maximum matching $M$, a factor-component $C$ is consistent if and only if $M_C$ is a perfect matching of $C$. The sets of consistent and inconsistent factor-components of $G$ are denoted by $\const{G}$ and $\inconst{G}$, respectively. A graph is {\em factor-connected} if it consists of only one factor-component. In particular, such a graph is {\em consistently} factor-connected if its only factor-component is consistent. Note that any consistent factor-component is a consistently factor-connected graph. 
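These notions can likewise be computed by brute force on small graphs: an edge $uv$ is allowed iff $\nu(G-u-v)=\nu(G)-1$, and $v\in D(G)$ iff $\nu(G-v)=\nu(G)$. The sketch below (helper names are ours; exponential time, for tiny graphs only) builds the factor-components and the Gallai-Edmonds family from these characterizations.

```python
from itertools import combinations

def is_matching(edges):
    """True iff no two of the given edges share a vertex."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def nu(vertices, edges):
    """Maximum matching size of the induced subgraph, by brute force."""
    E = [(u, v) for (u, v) in edges if u in vertices and v in vertices]
    for k in range(len(E), 0, -1):
        if any(is_matching(c) for c in combinations(E, k)):
            return k
    return 0

def allowed_edges(V, E):
    """uv is allowed iff some maximum matching uses it, iff nu(G-u-v) = nu(G)-1."""
    total = nu(V, E)
    return [(u, v) for (u, v) in E if nu(V - {u, v}, E) == total - 1]

def factor_components(V, E):
    """Connected components of the subgraph spanned by the allowed edges."""
    adj = {v: set() for v in V}
    for u, v in allowed_edges(V, E):
        adj[u].add(v)
        adj[v].add(u)
    comps, unseen = [], set(V)
    while unseen:
        stack = [unseen.pop()]
        comp = set(stack)
        while stack:
            w = stack.pop()
            for x in adj[w] - comp:
                comp.add(x)
                stack.append(x)
        unseen -= comp
        comps.append(comp)
    return comps

def gallai_edmonds(V, E):
    """D(G): vertices exposed by some maximum matching; A(G) = N(D(G))."""
    total = nu(V, E)
    D = {v for v in V if nu(V - {v}, E) == total}
    A = {v for v in V - D
         if any((v == u and w in D) or (v == w and u in D) for u, w in E)}
    return D, A, V - D - A
```

On the path 1-2-3-4 only the two end edges are allowed, so there are two consistent factor-components; on the star $K_{1,3}$ the three leaves form $D(G)$ and the centre forms $A(G)$.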
\section{Canonical Decompositions and Aim of Our Study} \label{sec:canonical} \subsection{Known Canonical Decompositions} We now explain more technical details of the canonical decompositions that were omitted from Section~\ref{sec:intro}. The {\em Dulmage-Mendelsohn}~\cite{dm1958, dm1959, dm1963}, {\em Kotzig-Lov\'asz}~\cite{kotzig1959a, kotzig1959b,kotzig1960,lovasz1972structure}, and {\em Gallai-Edmonds decompositions}~\cite{gallai1964, edmonds1965} are the three known canonical decompositions and have been extensively applied. They are provided by their respective structure theorems, which follow a certain common pattern: \begin{itemize} \item First, define a partition of a given graph into substructures, which is described matching theoretically and is, by definition, unique to each graph, such as the Gallai-Edmonds family or the set of factor-components. \item Second, provide statements about how the entire graph is structured and the maximum matchings it contains, such as where in the graph there are allowed or non-allowed edges, or the matching theoretic properties of the substructures determined by the partition. \end{itemize} Because these partitions are determined uniquely for a given graph, canonical decompositions can provide us with information about all maximum matchings, not just those of them that are specified in some way. They therefore exhibit a powerful and versatile nature. The traits of the three canonical decompositions are the following. \begin{itemize} \item The {\em Dulmage-Mendelsohn decomposition} states that, for bipartite graphs, the structure of factor-components can be described as a partially ordered set with respect to a certain binary relation. This decomposition provides an efficient solution of a system of linear equations by utilizing the sparsity of matrices~\cite{duff1986direct}. 
Additionally, it is the origin of {\em principal partition theory}~\cite{nakamura1988}, which is a branch of {\em submodular function theory}~\cite{fujishige2005}. \item The {\em Kotzig-Lov\'asz decomposition} captures the structure of consistently factor-connected graphs by defining a certain binary relation that is proved to be an equivalence relation. This decomposition is especially effective in the polyhedral study of matchings. From the Kotzig-Lov\'asz decomposition, many important results regarding the perfect matching polytopes have been obtained; see Lov\'asz and Plummer~\cite{lp1986} or Schrijver~\cite{schrijver2003} for surveys. \item Among them, the {\em Gallai-Edmonds decomposition} is probably the best known, because it is the essence of characterizing the size of a maximum matching and of designing algorithms for computing maximum matchings. It has contributed to matching theory in many aspects. This decomposition provides properties of graphs based on the Gallai-Edmonds family. Several algorithms for computing maximum matchings have been proposed using this decomposition~\cite{lp1986, cheriyan1997}. It also has applications in linear algebra~\cite{DBLP:conf/fct/Lovasz79, geelen2000}. \end{itemize} The exact statements of the three canonical decompositions are given in the following. The structures of graphs provided by Theorems~\ref{thm:dm}, \ref{thm:canonicalpartition}, and \ref{thm:gallaiedmonds} are the Dulmage-Mendelsohn, Kotzig-Lov\'asz, and Gallai-Edmonds decompositions, respectively. \begin{theorem}[Dulmage and Mendelsohn~\cite{dm1958, dm1959, dm1963}]\label{thm:dm} Let $G$ be a bipartite graph with color classes $A$ and $B$, and let $\mathcal{G}(G)$ be denoted by $\{ G_i : i \in I\}$, where $I = \{1,\ldots, |\mathcal{G}(G)|\}$. Let $A_i = V(G_i)\cap A$ and $B_i = V(G_i)\cap B$ for each $i \in I$. 
Then, there exists a partial order $\pardm{A}$ satisfying the following for any $i,j\in I$: \begin{enumerate} \item If $E[A_j, B_i] \neq\emptyset$, then $G_i\pardm{A} G_j$; and, \item if $G_i\pardm{A} H \pardm{A} G_j$ implies $G_i=H$ or $G_j = H$, then $E[A_j, B_i] \neq\emptyset$. \end{enumerate} \end{theorem} \begin{theorem}[Kotzig~\cite{kotzig1959a, kotzig1959b, kotzig1960}]\label{thm:canonicalpartition} Let $G$ be a consistently factor-connected graph. Define a binary relation $\sim$ as follows: for $u, v\in V(G)$, $u\sim v$ holds if $G-u-v$ is not factorizable. Then, $\sim$ is an equivalence relation on $V(G)$, and accordingly, $\gpart{G}$ is a partition of $V(G)$, where $\gpart{G} := V(G)/\sim$. \end{theorem} \begin{theorem}[the Gallai-Edmonds structure theorem; Gallai~\cite{gallai1964}, Edmonds~\cite{edmonds1965}]\label{thm:gallaiedmonds} For any graph $G$, the following hold: \begin{rmenum} \item The graph $G[D(G)]$ consists of $|A(G)| + |V(G)|-2\nu(G)$ connected components, and each of them is factor-critical, whereas each connected component of $G[C(G)]$ is factorizable. \item Let $M$ be an arbitrary maximum matching of $G$. Then, for each connected component $K$ of $G[D(G)]$, the set $M_K$ is a near-perfect matching of $K$; each vertex in $A(G)$ is matched to a vertex in $D(G)$, and furthermore, if $u$ and $v$ are distinct vertices from $A(G)$, then the vertices to which they are matched belong to distinct connected components of $G[D(G)]$; for each connected component $L$ of $G[C(G)]$, the set $M_L$ is a perfect matching of $L$. \item All edges in $E[A(G), D(G)]$ are allowed, whereas no edge in $E(G[A(G)])$ or $E[A(G), C(G)]$ is allowed. \end{rmenum} \end{theorem} In addition to the statements in these three structure theorems that derive the canonical decompositions, additional fundamental properties are known for each canonical decomposition. 
These include properties that use the respective canonical decompositions to describe what happens after basic operations that frequently occur in graph theory, such as adding or deleting vertices and edges. These properties accordingly tell us how to make good use of each canonical decomposition. See, for example, Lov\'asz and Plummer~\cite{lp1986} for these properties. \subsection{Limitations of Classical Canonical Decompositions} Although the above canonical decompositions are quite useful, we sometimes encounter problems that cannot be solved with any of them, because each of them targets only a particular class of graphs or can be too sparse to provide sufficient information. The Dulmage-Mendelsohn and Kotzig-Lov\'asz decompositions target only bipartite graphs and consistently factor-connected graphs, respectively. The Gallai-Edmonds decomposition, by definition, targets any graph $G$; however, it mainly focuses on the structure of $G[A(G)\cup D(G)]$ and thus provides little information about the remainder of the graph, that is, $G[C(G)]$, which can be a vast portion. In particular, if a given graph $G$ is factorizable, then $D(G) = A(G) = \emptyset$ and $C(G) = V(G)$ hold; thus, the Gallai-Edmonds decomposition claims nothing about $G$. This cannot be disregarded because perfect matchings are themselves a notion that attracts intense attention. Of course, the classical Kotzig-Lov\'asz decomposition is applicable to each factor-component in $\comp{G[C(G)]}$, ignoring the other part; however, the information obtained by this operation is meaningless for the entire given graph in most contexts. \subsection{Our New Canonical Decomposition} In this paper, we present a new canonical decomposition, the {\em basilica decomposition}, that overcomes the limitations of the classical decompositions; that is, it targets all graphs and simultaneously provides further information that the Gallai-Edmonds decomposition cannot. 
The main concepts and theorems that constitute this new canonical decomposition are the following. \begin{rmenum} \item \label{item:order} How a graph is organized from its factor-components can be described by a partially ordered structure; we find a canonically defined partial order between factor-components, which is similar to that in the Dulmage-Mendelsohn decomposition (Theorem~\ref{thm:order}). \item \label{item:sim} We obtain a generalization of the Kotzig-Lov\'asz decomposition for general graphs (Theorem~\ref{thm:generalizedcanonicalpartition}). This generalization considers the entire structure of a given graph and provides finer information than repeated applications of the classical Kotzig-Lov\'asz decomposition. \item \label{item:cor} There is a relationship between the above two concepts, even though they are defined independently (Theorem~\ref{thm:base}). This relationship unites the two notions into a canonical decomposition, in which we can view a graph as an architectural building-like structure. We name this new canonical decomposition the {\em basilica decomposition}. \end{rmenum} We note how this new canonical decomposition is obtained: all the new statements are provided with self-contained proofs in this paper, except for the algorithmic result in Section~\ref{sec:alg} that computes the basilica decomposition in polynomial time. Additionally, our results contain a greatly shortened proof of the Kotzig-Lov\'asz decomposition, which is also completely self-contained. \section{Basic Properties}\label{sec:props} \subsection{On Matchings} We now present some basic properties of matchings. We will sometimes use these properties implicitly. These are easily observed by parity arguments or by taking symmetric differences of matchings, and readers familiar with matching theory may wish to skip this subsection. \begin{lemma}\label{lem:cut2forwarding} Let $G$ be a graph and $M$ be a matching of $G$.
Let $X\subseteq V(G)$ be closed with respect to $M$, and let $x \in V(G)\setminus X$ and $y \in X$. Let $P$ be a path that is $M$-forwarding from $x$ to $y$ or $M$-saturated between $x$ and $y$. Let $z\in V(P)$ be the first vertex in $X$ that we encounter if we trace $P$ from $x$. Then, $xPz$ is an $M$-forwarding path from $x$ to $z$ with $V(xPz)\cap X = \{z\}$. \end{lemma} \begin{lemma}\label{lem:allowed} Let $G$ be a factorizable graph and $M$ be a perfect matching of $G$, and let $xy \in E(G)\setminus M$. The following three properties are equivalent: \begin{enumerate} \renewcommand{\labelenumi}{{\rm \theenumi}} \renewcommand{\theenumi}{(\roman{enumi})} \item \label{item:allowed} The edge $xy$ is allowed in $G$. \item \label{item:circuit} There is an $M$-alternating circuit $C$ with $xy\in E(C)$. \item \label{item:path} There is an $M$-saturated path between $x$ and $y$. \end{enumerate} \end{lemma} \subsection{On the Gallai-Edmonds Family and Factor-components} We now present some observations about factor-components. Lemmas~\ref{lem:da2path} and \ref{lem:a2d} are known statements that can be found in Edmonds' algorithm for maximum matchings or the Gallai-Edmonds structure theorem. These are easily confirmed. \begin{definition} Let $G$ be a graph. The set of vertices that are exposed by some maximum matching is denoted by $D(G)$. The set $\parNei{G}{D(G)}$ is denoted by $A(G)$, and the set $V(G)\setminus D(G) \setminus A(G)$ is denoted by $C(G)$. \end{definition} \begin{lemma}\label{lem:da2path} Let $G$ be a graph and $M$ be a maximum matching of $G$. \begin{rmenum} \item \label{item:da2path:d} A vertex $x$ is in $D(G)$ if and only if there exists an $M$-forwarding path from a vertex exposed by $M$ to $x$. \item \label{item:da2path:a} If a vertex $x$ is in $A(G)$, then there exists an $M$-exposed path between $x$ and a vertex exposed by $M$. \end{rmenum} \end{lemma} \begin{lemma} \label{lem:a2d} Let $G$ be a graph.
For any maximum matching of $G$, the vertex to which a vertex in $A(G)$ is matched is in $D(G)$. Accordingly, no edge in $\cut{C(G)}$ is allowed. \end{lemma} The next proposition, which characterizes consistent and inconsistent factor-components, follows immediately from Lemma~\ref{lem:a2d}. \begin{proposition}\label{prop:fcomp2dac} Let $G$ be a graph. A factor-component of $G$ is inconsistent if and only if it is a factor-component of $G[A(G)\cup D(G)]$. A factor-component of $G$ is consistent if and only if it is a factor-component of $G[C(G)]$. \end{proposition} \subsection{On Factor-critical Graphs} We now present some fundamental properties of factor-critical graphs. Some of these are well known, but we present their proofs again to keep this paper self-contained. The next one can be easily obtained by considering symmetric differences of matchings: \begin{lemma} \label{lem:path2root} Let $M$ be a near-perfect matching of a graph $G$ that exposes $v\in \Vg$. Then, $G$ is factor-critical if and only if for any $u\in \Vg$ there exists an $M$-\zero path from $u$ to $v$. \end{lemma} Lemma~\ref{lem:path2root} leads to the following three statements: \begin{lemma}\label{lem:fc2union} Let $G$ be a graph and $M$ be a matching of $G$. Let $H_1$ and $H_2$ be factor-critical subgraphs of $G$ such that there exists $v\in V(H_1)\cap V(H_2)$ and, for each $i \in \{ 1, 2\}$, $M_{H_i}$ is a near-perfect matching of $H_i$ exposing only $v$. Then, $H_1 + H_2$ is factor-critical. \end{lemma} \begin{proof} Obviously, $M_{H_1}\cup M_{H_2}$ is a near-perfect matching of $H_1 + H_2$ exposing only $v$. As $H_1$ and $H_2$ are both factor-critical, the claim follows from Lemma~\ref{lem:path2root}. \qed \end{proof} \begin{proposition}[implicitly stated in Lov\'asz~\cite{lovasz1972a}]\label{prop:fc_choice} Let $G$ be a factor-critical graph, let $v\in V(G)$, and let $M$ be a near-perfect matching that exposes $v$. Then, for any $e\in \cut{v}$, there is an $M$-ear relative to $v$ that contains $e$.
\end{proposition} \begin{proof} Let $u\in V(G)$ be the end of the edge $e$ other than $v$. From Lemma~\ref{lem:path2root}, there is an $M$-forwarding path $P$ from $u$ to $v$. Thus, $P+e$ is a desired $M$-ear. \qed \end{proof} \begin{theorem}[implicitly stated in Lov\'asz~\cite{lovasz1972a}]\label{thm:fc_nice} Let $G$ be a factor-critical graph. For any factor-critical subgraph $G'$ such that $G-V(G')$ is factorizable, the graph $G/G'$ is factor-critical. \end{theorem} \begin{proof} Let $M$ be a perfect matching of $G-V(G')$. Note that $M$ is also a near-perfect matching of $G/G'$ that exposes the vertex $g'$ corresponding to $G'$. Arbitrarily choose $v\in V(G')$, and let $M'$ be a near-perfect matching of $G'$ that exposes $v$. Then, $M'\cup M$ is a near-perfect matching of $G$ that exposes $v$. Let $x$ be an arbitrarily chosen vertex in $V(G)\setminus V(G')$. From Lemma~\ref{lem:path2root}, there is an $M'\cup M$-forwarding path $P$ from $x$ to $v$. Trace $P$ from $x$, and let $y$ be the first encountered vertex in $V(G')$. Then, in the graph $G/G'$, the path $xPy$ corresponds to an $M$-forwarding path from $x$ to $g'$. Hence, from Lemma~\ref{lem:path2root} again, $G/G'$ is factor-critical. \qed \end{proof} \section{Structure of Alternating Paths in Consistently Factor-connected Graphs} \label{sec:nonpositive} We now present our new results. In this section, we prove a proposition about consistently factor-connected graphs to be used in later sections. \begin{proposition}\label{prop:nonpositive} Let $G$ be a consistently factor-connected graph and $M$ be a perfect matching of $G$. Then, for any two vertices $u,v \in V(G)$, there is an $M$-saturated path between $u$ and $v$, or an $M$-\zero path from $u$ to $v$. \end{proposition} \begin{proof} Let $u\in V(G)$ be an arbitrary vertex. Let $U\subseteq V(G)$ be the set of vertices that can be reached from $u$ by $M$-saturated or $M$-forwarding paths. We obtain this proposition by showing $U = V(G)$.
Suppose, to the contrary, that $U \subsetneq V(G)$ holds. \begin{cclaim}\label{claim:nonpositive:contained} Let $v\in U$, and let $P$ be an $M$-saturated path between $u$ and $v$ or an $M$-forwarding path from $u$ to $v$. Then, $V(P)\subseteq U$ holds. \end{cclaim} \begin{proof} Let $w\in V(P)$. Then, $uPw$ is an $M$-saturated path between $u$ and $w$ or an $M$-forwarding path from $u$ to $w$. Therefore, $w\in U$ holds. Hence, we have $V(P)\subseteq U$. \qed \end{proof} As $G$ is connected, it has some edges that join $U$ and $V(G)\setminus U$. \begin{cclaim}\label{claim:nonpositive:nosaturate} Let $v\in U \cap \Gamma(V(G)\setminus U)$. Then, there is no $M$-saturated path between $u$ and $v$. \end{cclaim} \begin{proof} Suppose this claim fails, and let $P$ be an $M$-saturated path between $u$ and $v\in U \cap \Gamma(V(G)\setminus U)$. From Claim~\ref{claim:nonpositive:contained}, $V(P)\subseteq U$ holds. Therefore, the vertex $v'$ to which $v$ is matched by $M$ lies on $P$ and is thus in $U$; hence, by letting $w \in V(G)\setminus U$ be a vertex with $vw \in E(G)$, we have $w \neq v'$ and thus $vw \not\in M$. Hence, $P + vw$ is an $M$-forwarding path from $u$ to $w$, which contradicts $w\not\in U$, and this claim is proved. \qed \end{proof} \begin{cclaim}\label{claim:nonpositive:notinm} No edge joining $U$ and $V(G)\setminus U$ is in $M$. \end{cclaim} \begin{proof} Let $vw$ be an edge with $v\in U$ and $w\in V(G)\setminus U$. From Claims~\ref{claim:nonpositive:contained} and \ref{claim:nonpositive:nosaturate}, there is an $M$-forwarding path $P$ from $u$ to $v$ with $V(P)\subseteq U$. Hence, if $vw\in M$ then $P+vw$ is an $M$-saturated path between $u$ and $w$, which contradicts $w\not\in U$. Therefore, $vw \not \in M$ follows, and this claim is proved. \qed \end{proof} As $G$ is factor-connected, some edges in $E[U, V(G)\setminus U]$ are allowed. Let $e = vw$ be one of these edges, where $v\in U$ and $w\in V(G)\setminus U$. From Claim~\ref{claim:nonpositive:notinm}, $e\not\in M$ holds, and therefore, from Lemma~\ref{lem:allowed}, there is an $M$-saturated path $Q$ between $v$ and $w$.
From Claims~\ref{claim:nonpositive:contained} and \ref{claim:nonpositive:nosaturate}, there is an $M$-forwarding path $P$ from $u$ to $v$ with $V(P)\subseteq U$. Trace $P$ from $u$, and let $x$ be the first vertex we encounter that is in $Q$; such an $x$ certainly exists because $v\in V(P)\cap V(Q)$ holds. Note that, by this definition of $x$, $uPx + xQ\alpha$ forms a path for each $\alpha \in \{v, w\}$. \begin{cclaim}\label{claim:nonpositive:upx} The path $uPx$ is $M$-forwarding from $u$ to $x$. \end{cclaim} \begin{proof} Suppose this claim fails, that is, $uPx$ is an $M$-saturated path. Then, the vertex $x'$ to which $x$ is matched by $M$ satisfies $x' \in V(uPx)$; however, at the same time, we have $x' \in V(Q)$, because $x\in V(Q)$ holds and $Q$ is an $M$-saturated path. This contradicts the definition of $x$, and this claim is proved. \qed \end{proof} Note also that, for one of the two ends of $Q$, say $\alpha \in \{v, w\}$, the path $xQ\alpha$ is $M$-saturated. Hence, from Claim~\ref{claim:nonpositive:upx}, for this $\alpha$, it follows that $uPx + xQ\alpha$ is an $M$-saturated path between $u$ and $\alpha$. If $\alpha = v$, this contradicts Claim~\ref{claim:nonpositive:nosaturate}; if $\alpha = w$, then $w\in U$ follows, which contradicts $w \not\in U$. This completes the proof of this proposition. \qed \end{proof} \section{Partially Ordered Structure}\label{sec:order} In this section, we prove that the factor-components of a graph form a partially ordered set with respect to a certain canonical binary relation that we define here. As stated before, factor-components are the fundamental building blocks of a graph, in that a graph consists of its factor-components and the edges between them. However, how a graph is constructed from factor-components and edges is not arbitrary but follows a certain rule. That is, suppose we are given some factor-connected graphs and construct a new graph by joining them with edges in an arbitrary manner; the factor-components of the resulting graph will not, in general, be equal to the original factor-connected graphs. We show that this rule is, in fact, an ordered structure on the factor-components. \begin{definition} Given a graph $G$, a set $X\subseteq V(G)$ is {\em separating} if it is a disjoint union of the vertex sets of some factor-components, i.e., if there exist $H_1,\ldots, H_k\in\comp{G}$, where $k\ge 1$, such that $X = V(H_1)\dot{\cup}\cdots \dot{\cup} V(H_k)$. \end{definition} Note that a nonempty set $X$ is separating if and only if $\cut{X}\cap M = \emptyset$ holds for any maximum matching $M$. \begin{definition} Let $G$ be a graph, and let $G_1,G_2\in\mathcal{G}(G)$. A separating set $X$ is a {\em critical-inducing set for} $G_1$ if $V(G_1)\subseteq X$ holds and $G[X]/G_1$ is a factor-critical graph. Moreover, we say that $X$ is a {\em critical-inducing set for} $G_1$ {\em to} $G_2$ if $V(G_1)\cup V(G_2)\subseteq X$ holds and $G[X]/G_1$ is a factor-critical graph. We say $G_1\yield G_2$ if there is a critical-inducing set for $G_1$ to $G_2$. \end{definition} We show that $\yield$ is a partial order in Theorem~\ref{thm:order}.
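Before proceeding to the proofs, the basic objects of this section can be made concrete on small graphs. The following brute-force sketch (exponential time; the helper names are ours, not notation from this paper) computes the factor-components, assuming the standard definitions that an edge is allowed if some maximum matching contains it, and that factor-components are the connected components of the subgraph formed by the allowed edges.

```python
from itertools import combinations

def is_matching(edge_subset):
    # No two edges may share an endpoint.
    seen = set()
    for u, v in edge_subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def nu(vertices, edges):
    # Maximum matching size by exhaustive search (tiny graphs only).
    for k in range(len(vertices) // 2, 0, -1):
        if any(is_matching(c) for c in combinations(edges, k)):
            return k
    return 0

def factor_components(vertices, edges):
    n = nu(vertices, edges)
    # An edge uv is allowed iff nu(G - u - v) = nu(G) - 1,
    # i.e. some maximum matching of G contains uv.
    allowed = [(u, v) for u, v in edges
               if nu(vertices - {u, v},
                     [e for e in edges if u not in e and v not in e]) == n - 1]
    # Factor-components: connected components with respect to allowed edges.
    comps, remaining = [], set(vertices)
    while remaining:
        comp, stack = set(), [remaining.pop()]
        while stack:
            x = stack.pop()
            comp.add(x)
            for u, v in allowed:
                if x == u and v not in comp:
                    stack.append(v)
                elif x == v and u not in comp:
                    stack.append(u)
        remaining -= comp
        comps.append(comp)
    return sorted(map(sorted, comps))

# Triangle 1-2-3 with pendant edge 3-4: only (1,2) and (3,4) are allowed.
assert factor_components({1, 2, 3, 4},
                         [(1, 2), (2, 3), (1, 3), (3, 4)]) == [[1, 2], [3, 4]]
```

For the triangle with a pendant edge, the two factor-components $\{1,2\}$ and $\{3,4\}$ are joined by the non-allowed edges $(2,3)$ and $(1,3)$, which is exactly the situation the basilica order describes.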
Reflexivity is obvious from the definition; hence, the following lemmas are provided for transitivity and antisymmetry. First of all, observe the following: \begin{lemma}\label{lem:order2const} Let $G$ be a graph. If $X$ is a critical-inducing set for a factor-component $H \in\comp{G}$ such that $X \neq V(H)$, then $X\setminus V(H)\subseteq C(G)$ holds. Consequently, for any maximum matching $M$ of $G$, $M_{X\setminus V(H)}$ is a perfect matching of $G[X\setminus V(H)]$. Accordingly, if $G_1\yield G_2$ holds for two distinct factor-components $G_1$ and $G_2$, then $G_2$ is consistent. \end{lemma} \begin{proof} As $G[X]/H$ is factor-critical, $X\setminus V(H)$ is a separating set such that $G[ X \setminus V(H)]$ has a perfect matching. Therefore, the factor-components that comprise $X\setminus V(H)$ are consistent, which implies from Proposition~\ref{prop:fcomp2dac} that they are contained in $C(G)$. The remaining claims now follow immediately. \qed \end{proof} The following three lemmas can be easily confirmed by analogy between factor-critical graphs and critical-inducing sets. The next lemma follows from Lemmas~\ref{lem:path2root} and \ref{lem:cut2forwarding}. \begin{lemma}\label{lem:path2base} Let $G$ be a graph, $M$ be a maximum matching of $G$, and $X\subseteq V(G)$ be a separating set, and let $G_1\in\mathcal{G}(G)$. The following three statements are equivalent. \begin{rmenum} \item \label{item:path2base:main} The set $X$ is a critical-inducing set for $G_1$. \item \label{item:path2base:pathcut} For any $x\in X\setminus V(G_1)$, there exists $y\in V(G_1)$ such that there is an $M$-forwarding path from $x$ to $y$ whose vertices except $y$ are in $X\setminus V(G_1)$. \item \label{item:path2base:pathin} For any $x\in X\setminus V(G_1)$, there exists $y\in V(G_1)$ such that there is an $M$-forwarding path from $x$ to $y$. \end{rmenum} \end{lemma} The next lemma is immediate from Lemma~\ref{lem:fc2union}.
\begin{lemma}\label{lem:union} Let $G$ be a graph, and let $G_1\in \mathcal{G}(G)$. If $X_1, X_2 \subseteq V(G)$ are critical-inducing sets for $G_1$, then $X_1\cup X_2$ is also a critical-inducing set for $G_1$. \end{lemma} The next one is easily obtained from Proposition~\ref{prop:fc_choice} and Theorem~\ref{thm:fc_nice}. \begin{lemma} \label{lem:inductive-ear} Let $G$ be a graph and $M$ be a maximum matching of $G$, and let $G_1\in\mathcal{G}(G)$. Let $X$ and $X'$ be critical-inducing sets for $G_1$ with $X'\subseteq X$. Then, $G[X]/X'$ is factor-critical, and $M_{X\setminus X'}$ is a near-perfect matching of it that exposes only the contracted vertex corresponding to $X'$. Moreover, if $X'\subsetneq X$ holds, then there exists an $M$-ear relative to $X'$ whose internal vertices are not empty and are contained in $X\setminus X'$. \end{lemma} Transitivity of $\yield$ now follows rather easily: \begin{lemma}\label{lem:transitivity} Let $G$ be a graph and $G_1$, $G_2$, $G_3$ be factor-components of $G$. If $G_1 \yield G_2$ and $G_2 \yield G_3$ hold, then $G_1 \yield G_3$ holds. \end{lemma} \begin{proof} Let $M$ be a maximum matching of $G$. Let $X_1$ and $X_2$ be critical-inducing sets for $G_1$ to $G_2$ and for $G_2$ to $G_3$, respectively. We prove that $X_1\cup X_2$ is a critical-inducing set for $G_1$ to $G_3$. First, $X_1\cup X_2$ is obviously a separating set that contains $V(G_1)$ and $V(G_3)$. Take $x\in X_1\cup X_2$ arbitrarily. If $x\in X_1$ holds, then, from Lemma~\ref{lem:path2base}, there exists an $M$-forwarding path $P_x$ from $x$ to a vertex in $V(G_1)$ with $V(P_x)\subseteq X_1$. If $x\in X_2\setminus X_1$ holds, then, from Lemma~\ref{lem:path2base}, there is an $M$-forwarding path $Q_x$ from $x$ to a vertex in $V(G_2)$. From Lemma~\ref{lem:cut2forwarding}, there exists $y\in X_1$ such that $xQ_xy$ is an $M$-forwarding path with $V(xQ_xy)\cap X_1 = \{y\}$. We thus obtain an $M$-forwarding path from $x$ to a vertex in $V(G_1)$, namely, $xQ_xy+P_y$.
Therefore, from Lemma~\ref{lem:path2base}, $X_1\cup X_2$ is a critical-inducing set for $G_1$ to $G_3$, and the proof is complete. \qed \end{proof} In the following, we provide definitions and lemmas to prove antisymmetry of $\yield$. \begin{definition} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $X_0$ be a nonempty proper subset of $V(G)$. \begin{rmenum} \item Let $X\subseteq V(G)$ be a nonempty set of vertices that is disjoint from $X_0$ and is closed with respect to $M$. \item Let $P$ be an $M$-ear relative to $X_0$ with $\earint{P}\neq\emptyset$ and $\earint{P}\subseteq X$. \end{rmenum} For each $x\in X$, define a set of paths $\pathfamily{x}{X}{P}{X_0}{M}{G}$ as follows: A path $Q$ is an element of $\pathfamily{x}{X}{P}{X_0}{M}{G}$ if it is $M$-forwarding from $x$ to a vertex $y\in \earint{P}$ with $V(Q)\subseteq X$ and $V(Q) \cap V(P) = \{y\}$. Additionally, we define a property $\parcondxp{X}{P}{X_0}{M}{G}$ as follows: $\parcondxp{X}{P}{X_0}{M}{G}$ is true if $\pathfamily{x}{X}{P}{X_0}{M}{G} \neq \emptyset$ for each $x\in X$. \end{definition} Lemmas~\ref{lem:int2root} to \ref{lem:order} in the following present properties of $\Psi$. Among these lemmas, Lemma~\ref{lem:order} is used directly in the proof of Theorem~\ref{thm:order}. For two distinct factor-components $G_1$ and $G_2$ with $G_1\yield G_2$, this lemma states that there exist $X$ and $P$ with $\parcondxp{X}{P}{G_1}{M}{G}$ and $V(G_2)\subseteq X$. This statement implies that $G_2 \yield G_1$ does not hold and thus proves antisymmetry of $\yield$. Lemmas~\ref{lem:int2root}, \ref{lem:compclosure}, and \ref{lem:extension} are used to prove Lemma~\ref{lem:order}. Lemma~\ref{lem:component2cut} is provided for Lemma~\ref{lem:compclosure}. Lemma~\ref{lem:int2root} is derived rather easily by considering concatenation of paths. \begin{lemma}\label{lem:int2root} Let $G$ be a graph, $M$ be a maximum matching of $G$, and $X_0$ be a nonempty proper subset of $V(G)$.
If $\parcondxp{X}{P}{X_0}{M}{G}$ holds for $X$ and $P$, then, for any $x\in X$, $P$ has an end $w$ such that there exists an $M$-forwarding path $R$ from $x$ to $w$ with $V(R)\setminus \{w\} \subseteq X$. \end{lemma} \begin{proof} Let $x\in X$, and let $Q\in \pathfamily{x}{X}{P}{X_0}{M}{G}$ be an $M$-forwarding path from $x$ to a vertex $y \in \earint{P}$. Let $w$ be the end of $P$ such that $yPw$ is an $M$-forwarding path from $y$ to $w$. Then, $Q + yPw$ is an $M$-forwarding path with the desired property. \qed \end{proof} The next lemma is an observation about the intersection of a consistent factor-component and a set of vertices closed with respect to a maximum matching. \begin{lemma} \label{lem:component2cut} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $X\subseteq V(G)$ be closed with respect to $M$, and let $H\in\const{G}$ be such that $V(H)\cap X \neq \emptyset$. Then, for any $x\in V(H)$, there exist a vertex $y\in X$ and an $M$-forwarding path $P$ from $x$ to $y$ with $V(P)\setminus \{y\} \subseteq V(H)\setminus X$. \end{lemma} \begin{proof} Take $z\in X\cap V(H)$ arbitrarily. From Proposition~\ref{prop:nonpositive}, there is a path $Q$ that is $M$-forwarding from $x$ to $z$ or $M$-saturated between $x$ and $z$. Trace $Q$ from $x$, and let $y$ be the first vertex in $X$ that we encounter. Then, from Lemma~\ref{lem:cut2forwarding}, $xQy$ is a desired path. \qed \end{proof} The next lemma is derived from Lemma~\ref{lem:component2cut} and is used to prove Lemma~\ref{lem:order}. \begin{lemma}\label{lem:compclosure} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $X_0$ be a nonempty proper subset of $V(G)$ that is separating. 
If a set of vertices $X \subseteq C(G)$ and an $M$-ear $P$ relative to $X_0$ satisfy $\parcondxp{X}{P}{X_0}{M}{G}$, then $X^*$ is a separating set that satisfies $\parcondxp{X^*}{P}{X_0}{M}{G}$, where $X^* := X \cup \bigcup \{ V(H): H\in\mathcal{G}(G), V(H)\cap X \neq \emptyset \}$. Accordingly, if $X_0 = V(G_1)$ for some $G_1\in\mathcal{G}(G)$, then $X^*\cup V(G_1)$ is a critical-inducing set for $G_1$. \end{lemma} \begin{proof} First, confirm that $X^*$ is disjoint from $X_0$ and is separating. Obviously, for each $x\in X$, any path in $\pathfamily{x}{X}{P}{X_0}{M}{G}$ is also a path in $\pathfamily{x}{X^*}{P}{X_0}{M}{G}$, and therefore $\pathfamily{x}{X^*}{P}{X_0}{M}{G} \neq \emptyset$. Hence, it suffices to prove $\pathfamily{x}{X^*}{P}{X_0}{M}{G} \neq \emptyset$ for each $x\in V(H)\setminus X$, where $H$ is a factor-component with $V(H)\cap X \neq \emptyset$. As $X\subseteq C(G)$ holds, Proposition~\ref{prop:fcomp2dac} implies that $H$ is consistent. From Lemma~\ref{lem:component2cut}, there is an $M$-forwarding path $R$ from $x$ to a vertex $y \in X$ with $V(R)\setminus \{y\} \subseteq V(H)\setminus X$. For $Q \in \pathfamily{y}{X}{P}{X_0}{M}{G}$, the concatenation $R + Q$ is a path in $\pathfamily{x}{X^*}{P}{X_0}{M}{G}$. Thus, we obtain $\parcondxp{X^*}{P}{X_0}{M}{G}$. Accordingly, the remaining claim of the lemma also follows from Lemmas~\ref{lem:path2base} and \ref{lem:int2root}. \qed \end{proof} Note also the following observation about $\Psi$, which is used in the proof of Lemma~\ref{lem:order}. \begin{lemma} \label{lem:extension} Let $G$ be a graph, $M$ be a maximum matching of $G$, and $X_0$ be a nonempty proper subset of $V(G)$. \begin{rmenum} \item \label{item:extension:ground} If $P_0$ is an $M$-ear relative to $X_0$ with $\earint{P_0} \neq \emptyset$, then $\parcondxp{\earint{P_0}}{P_0}{X_0}{M}{G}$ holds.
\item \label{item:extension:extension} Let $X$ and $P$ be such that $\parcondxp{X}{P}{X_0}{M}{G}$ holds, and let $Q$ be an $M$-ear relative to $X$ with $\earint{Q}\neq\emptyset$ and $\earint{Q}\cap X_0 = \emptyset$. Then, $\parcondxp{X\cup\earint{Q}}{P}{X_0}{M}{G}$ also holds. \end{rmenum} \end{lemma} \begin{proof} The property $\parcondxp{\earint{P_0}}{P_0}{X_0}{M}{G}$ holds because each vertex $y\in \earint{P_0}$ forms a trivial $M$-forwarding path of $\pathfamily{y}{\earint{P_0}}{P_0}{X_0}{M}{G}$. For each $x\in X$, obviously $\pathfamily{x}{X\cup \earint{Q}}{P}{X_0}{M}{G} \neq \emptyset$ holds. For each $x\in \earint{Q}$, let $w$ be the end of $Q$ such that $xQw$ is an $M$-forwarding path from $x$ to $w$. For $R\in \pathfamily{w}{X}{P}{X_0}{M}{G}$, $xQw + R$ is a path of $\pathfamily{x}{X\cup\earint{Q}}{P}{X_0}{M}{G}$. Thus, $\parcondxp{X\cup\earint{Q}}{P}{X_0}{M}{G}$ is proved. \qed \end{proof} The next lemma is the key to Theorem~\ref{thm:order}. \begin{lemma}\label{lem:order} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $G_1, G_2\in\mathcal{G}(G)$ be such that $G_1\neq G_2$ and $G_1\yield G_2$ hold. Then there exist a set of vertices $X\subseteq V(G)$ and an $M$-ear $P$ relative to $G_1$ such that $V(G_2)\subseteq X$ and $\parcondxp{X}{P}{G_1}{M}{G}$ hold. \end{lemma} \begin{proof} Let $X\subseteq V(G)$ be a critical-inducing set for $G_1$ to $G_2$. Define a family $\mathcal{Y}\subseteq 2^{X\setminus V(G_1)}$ as follows: A set of vertices $W$ is a member of $\mathcal{Y}$ if $W$ is an (inclusion-wise) maximal subset of $X\setminus V(G_1)$ that satisfies $\parcondxp{W}{P}{G_1}{M}{G}$ for some $M$-ear $P$ relative to $G_1$. Let $X' := V(G_1) \cup \bigcup_{W \in\mathcal{Y}} W = \bigcup_{W \in\mathcal{Y}} (V(G_1) \cup W)$. Lemma~\ref{lem:compclosure} implies that, for each $W \in \mathcal{Y}$, $V(G_1) \cup W$ is a critical-inducing set for $G_1$. Accordingly, from Lemma~\ref{lem:union}, $X'$ is also a critical-inducing set for $G_1$.
We prove this lemma by showing $V(G_2)\subseteq X'$. Suppose the contrary. Then, $X' \subsetneq X$ holds. From Lemma~\ref{lem:inductive-ear}, there exists an $M$-ear $Q$ relative to $X'$ such that $\earint{Q}\neq \emptyset$ and $\earint{Q}\subseteq X\setminus X'$ hold. In the following, note that if any $W\subseteq X\setminus V(G_1)$ with $W \cap \earint{Q} \neq \emptyset$ satisfies $\parcondxp{W}{P}{G_1}{M}{G}$ for some $M$-ear $P$, then it contradicts the definition of $\mathcal{Y}$ under the current hypothesis. Let $\mathcal{Y}^* := \mathcal{Y} \cup \{V(G_1)\}$. First consider the case where both ends of $Q$ are contained in the same member of $\mathcal{Y}^*$, say, $W$. If $W\in \mathcal{Y}$ holds, then, from Lemma~\ref{lem:extension}\ref{item:extension:extension}, $W \cup \earint{Q}$ is a set of vertices that satisfies $\parcondxp{W \cup \earint{Q}}{P}{G_1}{M}{G}$, where $P$ is an $M$-ear with $\parcondxp{W}{P}{G_1}{M}{G}$; otherwise, that is, if $W = V(G_1)$, then Lemma~\ref{lem:extension}\ref{item:extension:ground} implies $\parcondxp{\earint{Q}}{Q}{G_1}{M}{G}$. This is a contradiction. Next consider the case where two ends $u_1$ and $u_2$ of $Q$ are contained in distinct members of $\mathcal{Y}^*$, say, $W_1$ and $W_2$, respectively. For each $i\in \{1, 2\}$, if $W_i$ is a member of $\mathcal{Y}$, then let $P_i$ be an $M$-ear relative to $G_1$ such that $\parcondxp{W_i}{P_i}{G_1}{M}{G}$ holds; according to Lemma~\ref{lem:int2root}, there exists an $M$-forwarding path $R_i$ from $u_i$ to an end of $P_i$, say, $r_i$. Otherwise, that is, if $W_i = V(G_1)$, let $R_i$ be the trivial $M$-forwarding path that consists solely of $u_i$, and let $r_i := u_i$. If $(V(R_1)\setminus \{r_1\}) \cap (V(R_2)\setminus \{r_2\}) = \emptyset$, then let $\hat{Q} := R_1 + Q + R_2$. From Lemma~\ref{lem:extension}, $\hat{Q}$ is an $M$-ear relative to $G_1$ such that $\parcondxp{\earint{\hat{Q}}}{\hat{Q}}{G_1}{M}{G}$ holds, which is a contradiction. 
Otherwise, that is, if $(V(R_1)\setminus \{r_1\}) \cap (V(R_2)\setminus \{r_2\}) \neq \emptyset$ holds, then we have $W_1, W_2 \in \mathcal{Y}$. Trace $R_2$ from $u_2$, and let $x$ be the first vertex we encounter that is in $W_1$. Then, from Lemma~\ref{lem:cut2forwarding}, $u_2R_2x$ is an $M$-forwarding path from $u_2$ to $x$, and therefore $Q + u_2R_2x$ is an $M$-ear relative to $W_1$. Thus, this case is reduced to the first case, and we are again led to a contradiction. Hence, we obtain $X = X'$, and therefore $V(G_2)$ is contained in some member of $\mathcal{Y}$. This completes the proof. \qed \end{proof} We can finally prove that $\yield$ is a partial order. \begin{theorem}\label{thm:order} For any graph $G$, the binary relation $\yield$ is a partial order on $\mathcal{G}(G)$. \end{theorem} \begin{proof} Reflexivity is obvious from the definition. Transitivity follows from Lemma~\ref{lem:transitivity}. Hence, we prove antisymmetry. Let $G_1, G_2\in \mathcal{G}(G)$ be factor-components with $G_1\yield G_2$ and $G_2\yield G_1$. Suppose antisymmetry fails, that is, $G_1 \neq G_2$ holds. Note that, from Lemma~\ref{lem:order2const}, $G_1$ is consistent. Let $M$ be a maximum matching of $G$. From Lemma~\ref{lem:order}, there exist a set of vertices $X\subseteq V(G)$ with $V(G_2)\subseteq X$ and an $M$-ear $P_1$ relative to $G_1$ that satisfy $\parcondxp{X}{P_1}{G_1}{M}{G}$. Let $u_1$ and $v_1$ be the ends of $P_1$. By Lemma~\ref{lem:path2base}, there exists $w\in V(G_2)$ such that there is an $M$-forwarding path $Q$ from $u_1$ to $w$. Trace $Q$ from $u_1$, and let $x$ be the first vertex in $(X\cup \{v_1\})\setminus \{u_1\}$ that we encounter; such a vertex exists because $V(G_2)\subseteq X$ holds. \begin{cclaim} Without loss of generality, we can assume that $x\neq v_1$, that is, $x\in X$ holds and $u_1Qx$ is a path with $v_1\not\in V(u_1Qx)\setminus\{u_1\}$ that is $M$-forwarding from $u_1$ to $x$.
\end{cclaim} \begin{proof} Suppose the claim fails, that is, $x = v_1$ holds. Then, $u_1\neq v_1$ holds by the definition of $x$. If $u_1Qv_1$ is an $M$-saturated path, then $P_1 + u_1Qv_1$ forms an $M$-alternating circuit that contains the non-allowed edges in $E(P_1) \cap \delta(G_1)$, which contradicts Lemma~\ref{lem:allowed}. Otherwise, that is, if $u_1Qv_1$ is an $M$-forwarding path from $u_1$ to $v_1$, then $v_1Qw$ is an $M$-forwarding path from $v_1$ to $w$ that is disjoint from $u_1$. Redefine $x$ as the first vertex in $X$ that we encounter if we trace $v_1Qw$ from $v_1$. Then, $v_1Qx$ is a path that is disjoint from $u_1$ and is $M$-forwarding from $v_1$ to $x$, according to Lemma~\ref{lem:cut2forwarding}. Therefore, by swapping the roles of $u_1$ and $v_1$, we obtain this claim without loss of generality. \qed \end{proof} Therefore, hereafter let $x \in X$ and let $u_1Qx$ be an $M$-forwarding path from $u_1$ to $x$ with $v_1\not\in V(u_1Qx)\setminus\{u_1\}$. As $x\in X$ holds, $\parcondxp{X}{P_1}{G_1}{M}{G}$ implies that there is an $M$-forwarding path $R$ from $x$ to an internal vertex of $P_1$, say, $y$, such that $V(R)\subseteq X$ and $V(R)\cap V(P_1) = \{y\}$. If $u_1P_1y$ has an even number of edges, then $u_1Qx + xRy + yP_1u_1$ is an $M$-alternating circuit that contains some non-allowed edges, namely, the edges in $E(P_1)\cap \delta(u_1)$, which contradicts Lemma~\ref{lem:allowed}. Hence, hereafter we assume that $u_1P_1y$ has an odd number of edges. From Proposition~\ref{prop:nonpositive}, there is a path $L$ of $G_1$ that is $M$-saturated between $v_1$ and $u_1$ or $M$-forwarding from $v_1$ to $u_1$. Trace $L$ from $v_1$, and let $z$ be the first vertex we encounter that is on $u_1Qx$; note that Lemma~\ref{lem:cut2forwarding} implies that $v_1Lz$ is an $M$-forwarding path from $v_1$ to $z$. Additionally, note that $L$ is disjoint from $X$, because $V(L)\subseteq V(G_1)$ holds and $X$ is disjoint from $V(G_1)$.
If $u_1Qz$ has an odd number of edges, then $zQu_1 + P_1 + v_1Lz$ is an $M$-alternating circuit that contains non-allowed edges, say, the edges in $E(P_1)\cap \delta(G_1)$, which contradicts Lemma~\ref{lem:allowed}. If $u_1Qz$ has an even number of edges, then $v_1Lz + zQx + xRy + yP_1u_1$ is an $M$-alternating circuit, which is again a contradiction. Thus, we obtain $G_1 = G_2$, and the theorem is proved. \qed \end{proof} \begin{remark} The partially ordered structure determined by $\yield$ is not a generalization of the Dulmage-Mendelsohn decomposition. We can confirm that $\yield$ is uniquely determined for a graph, whereas the partial order for the Dulmage-Mendelsohn decomposition depends on the choice of color classes. If a graph $G$ is bipartite, then the poset $(\mathcal{G}(G), \yield)$ has a trivial structure. In our next work~\cite{kita2012c}, we give a generalization of the Dulmage-Mendelsohn decomposition using the results in this paper. \end{remark} \begin{remark} From Lemma~\ref{lem:order2const}, any inconsistent factor-component is minimal with respect to $\yield$. \end{remark} We call the partial order $\yield$ the {\em basilica order}. \section{Generalization of the Kotzig-Lov\'asz Decomposition} \label{sec:part} In this section, we give a generalization of the Kotzig-Lov\'asz decomposition for general graphs. Given a graph $G$, the {\em deficiency} of $G$ is the number $|V(G)|-2\nu(G)$ and is denoted by $\deficiency{G}$, where $\nu(G)$ is the size of a maximum matching. That is, $\deficiency{G}$ is the number of vertices exposed by a maximum matching. Note that $\deficiency{G} = 0$ if and only if $G$ is factorizable. \begin{definition} Let $G$ be a graph. For $u,v\in V(G)$, we say $u\gsim{G} v$ if $u = v$ holds or if $u$ and $v$ are contained in the same factor-component and $\deficiency{G-u-v} > \deficiency{G}$ holds.
\end{definition} By definition, if a graph $G$ is consistently factor-connected then the binary relation $\gsim{G}$ coincides with $\sim$ given by Kotzig~\cite{kotzig1959a, kotzig1959b, kotzig1960}. We prove in Theorem~\ref{thm:generalizedcanonicalpartition} that $\gsim{G}$ is an equivalence relation. Lemmas~\ref{lem:d2single} to \ref{lem:a2sim} in the following are used to prove Theorem~\ref{thm:generalizedcanonicalpartition}. These lemmas relate the deficiency to the Gallai-Edmonds family, and could be derived more easily from the Gallai-Edmonds structure theorem. However, we prove them without using the Gallai-Edmonds structure theorem to keep our results self-contained. The next lemma implies that each vertex in $D(G)$ forms an equivalence class that is a singleton. \begin{lemma} \label{lem:d2single} Let $G$ be a graph, and let $u\in D(G)$. Then, for any $v\in V(G)\setminus \{u\}$, $\deficiency{G-u-v} \le \deficiency{G}$ holds. \end{lemma} \begin{proof} As $u\in D(G)$ holds, $\deficiency{G-u} = \deficiency{G}-1$. Obviously, $|\deficiency{G-u-v} - \deficiency{G-u}| = 1$. Hence, $\deficiency{G-u-v} \le \deficiency{G-u} + 1 = \deficiency{G}$. \qed \end{proof} The next lemma will be used in both Lemma~\ref{lem:a2sim} and Theorem~\ref{thm:generalizedcanonicalpartition}. \begin{lemma}\label{lem:def2saturated} Let $G$ be a graph and $M$ be a maximum matching of $G$, and let $u, v\in V(G)\setminus D(G)$ be two distinct vertices. Then $\deficiency{G-u-v} \le \deficiency{G}$ holds if and only if there exists an $M$-saturated path between $u$ and $v$. \end{lemma} \begin{proof} We first prove the sufficiency. Let $P$ be an $M$-saturated path between $u$ and $v$. Then, $M\triangle E(P)$ is a matching of $G-u-v$ that covers every vertex that $M$ covers other than $u$ and $v$. Hence, $\deficiency{G-u-v} \le \deficiency{G}$ holds. Next, we prove the necessity. If $uv$ is an edge in $M$, then the claim obviously holds.
Hence, in the following, we assume $uu', vv'\in M$ for some $u', v' \in V(G)\setminus \{u,v\}$. Then, $M\setminus \{uu', vv'\}$ is a matching of $G-u-v$ but, as $\deficiency{G-u-v} \le \deficiency{G}$ holds, it is not maximum, because it exposes the vertices in $S \cup \{u', v'\}$, where $S$ is the set of vertices that $M$ exposes. Hence, $G-u-v$ has an $M$-exposed path $P$ whose ends are in $S \cup \{u', v'\}$. If both ends are in $S$, then $M\triangle E(P)$ is a larger matching of $G$ than $M$, which is a contradiction. If one end $x$ is in $S$ and the other is equal to either $u'$ or $v'$, say, $u'$, then, in $G$, $P + uu'$ is an $M$-forwarding path from $u$ to $x$. This implies $u\in D(G)$ from Lemma~\ref{lem:da2path} \ref{item:da2path:d}, which is a contradiction. Therefore, the ends of $P$ are $u'$ and $v'$, and $P+ uu' + vv'$ is an $M$-saturated path between $u$ and $v$. \qed \end{proof} The next lemma implies that $A(G)\cap V(H)$ forms an equivalence class for each $H\in\inconst{G}$. \begin{lemma} \label{lem:a2sim} Let $G$ be a graph, and let $M$ be a maximum matching of $G$. For any $u,v\in A(G)$, $\deficiency{G-u-v} > \deficiency{G}$ holds. \end{lemma} \begin{proof} If $u=v$, then the claim obviously holds. Hence, assume $u\neq v$ and suppose the claim fails, that is, suppose $\deficiency{G-u-v} \le \deficiency{G}$. Then, by Lemma~\ref{lem:def2saturated}, there exists an $M$-saturated path $P$ between $u$ and $v$. By Lemma~\ref{lem:da2path} \ref{item:da2path:a}, there is an $M$-exposed path $Q$ between $u$ and a vertex $x$ exposed by $M$. Trace $Q$ from $x$, and let $y$ be the first encountered vertex in $V(P)$. Then, $xQy$ is an $M$-forwarding path from $x$ to $y$, and, for a vertex $w$ that is equal to either $u$ or $v$, the path $xQy + yPw$ is an $M$-forwarding path from $w$ to $x$. This implies $w\in D(G)$ according to Lemma~\ref{lem:da2path} \ref{item:da2path:d}, which is a contradiction. Hence, we obtain $\deficiency{G-u-v} > \deficiency{G}$.
\qed \end{proof} The next theorem presents our generalization of the Kotzig-Lov\'asz decomposition. \begin{theorem} \label{thm:generalizedcanonicalpartition} For any graph $G$, the binary relation $\gsim{G}$ is an equivalence relation on $V(G)$. \end{theorem} \begin{proof} Reflexivity and symmetry obviously hold by definition. We prove transitivity in the following. Let $M$ be a maximum matching of $G$, and let $u, v, w\in V(G)$ be such that $u\gsim{G} v$ and $v\gsim{G} w$. If any two among $u, v, w$ are identical, the claim clearly follows. Therefore, it suffices to consider the case that they are mutually distinct; then, by the definition of $\gsim{G}$, $u$, $v$, and $w$ are contained in the same factor-component, say, $H$. If $H$ is inconsistent, then, from Lemma~\ref{lem:d2single}, $u, v, w \in A(G)$ follows. Thus, from Lemma~\ref{lem:a2sim}, $u\gsim{G} w$ is obtained. Therefore, in the remainder of this proof, we assume that $H$ is consistent. Suppose that the claim fails, that is, $u\not\gsim{G} w$. From Lemma~\ref{lem:def2saturated}, there is an $M$-saturated path $P$ between $u$ and $w$. By Proposition~\ref{prop:nonpositive}, there is an $M$-\zero path $Q$ from $v$ to $u$. Trace $Q$ from $v$, and let $x$ be the first vertex we encounter in $V(Q)\cap V(P)$. If $uPx$ has an odd number of edges, then $vQx + xPu$ is an $M$-saturated path between $u$ and $v$, which is a contradiction. If $uPx$ has an even number of edges, then $xPw$ has an odd number of edges, and by the same argument we have a contradiction. \qed \end{proof} If a graph $G$ is consistently factor-connected, then the family of equivalence classes under $\gsim{G}$, that is, $V(G)/\gsim{G}$, coincides with the original Kotzig-Lov\'asz decomposition~\cite{kotzig1959a, kotzig1959b, kotzig1960}. Therefore, for a general graph $G$, we denote $V(G)/\gsim{G}$ by $\gpart{G}$, and call it the {\em generalized Kotzig-Lov\'asz decomposition} or simply the {\em Kotzig-Lov\'asz decomposition}. By the definition of $\gsim{G}$, each equivalence class is contained in some factor-component.
Therefore, for each $H\in\mathcal{G}(G)$, the family $\{ S\in \gpart{G} : S\subseteq V(H)\}$ is a partition of $V(H)$; we denote this partition by $\pargpart{G}{H}$. The next statement shows that our generalization of the Kotzig-Lov\'asz decomposition provides information that the classical Kotzig-Lov\'asz decomposition does not. \begin{observation} \label{prop:refinement} For a factorizable graph $G$ and a factor-component $H\in \mathcal{G}(G)$, the partition $\pargpart{G}{H}$ is a refinement of $\gpart{H}$; that is, if two vertices $u, v\in V(H)$ satisfy $u \gsim{G} v$ in $G$, then $u\sim v$ holds in $H$. \end{observation} In general, $\pargpart{G}{H}$ can be a proper refinement of $\gpart{H}$. Therefore, our generalization of the Kotzig-Lov\'asz decomposition is not trivial; that is, $\gpart{G}$ is not merely a disjoint union of the Kotzig-Lov\'asz decompositions of the individual factor-components. Our proof of Theorem~\ref{thm:generalizedcanonicalpartition} provides a shortened and self-contained proof of the classical Kotzig-Lov\'asz decomposition. Kotzig's proof spans three papers, so proving that $\sim$ is an equivalence relation from first principles has been considered challenging~\cite{lp1986}. Lov\'asz's proof uses the Gallai-Edmonds structure theorem and, accordingly, is not self-contained. However, it can in fact be proved in a simple way, without the premise of the Gallai-Edmonds structure theorem or the notion of barriers. All the results used to obtain Theorem~\ref{thm:generalizedcanonicalpartition} are self-contained in this paper. Lov\'asz~\cite{lovasz1972b} reformulated the classical Kotzig-Lov\'asz decomposition using the notion of {\em barriers}~\cite{lp1986}. In our next paper~\cite{DBLP:conf/cocoa/Kita13, kita2012canonical}, we discuss the relationship between barriers and our generalized Kotzig-Lov\'asz decomposition, and show that our decomposition also provides a generalization of Lov\'asz's formulation.
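Because $\gsim{G}$ is defined purely in terms of deficiencies, it can be checked on small examples directly from the definition. The following Python sketch is purely illustrative (it brute-forces maximum matchings over edge subsets, so it runs in exponential time and is far from the $O(nm)$ algorithm of Section~\ref{sec:alg:part}); all function names are ours, not part of any library.

```python
from itertools import combinations

def max_matching_size(vertices, edges):
    # Brute force: the largest set of pairwise vertex-disjoint edges.
    # Exponential time; only suitable for very small graphs.
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            if len({v for e in sub for v in e}) == 2 * k:
                return k
    return 0

def deficiency(vertices, edges):
    # def(G) = |V(G)| - 2 * nu(G).
    return len(vertices) - 2 * max_matching_size(vertices, edges)

def allowed_edges(vertices, edges):
    # An edge uv is allowed iff some maximum matching contains it,
    # i.e. iff nu(G - u - v) = nu(G) - 1.
    nu = max_matching_size(vertices, edges)
    out = []
    for (u, v) in edges:
        rest_v = [x for x in vertices if x not in (u, v)]
        rest_e = [e for e in edges if u not in e and v not in e]
        if max_matching_size(rest_v, rest_e) == nu - 1:
            out.append((u, v))
    return out

def factor_component(vertices, edges, u):
    # Vertex set of the factor-component containing u: the connected
    # component of u in the subgraph formed by the allowed edges.
    adj = {x: set() for x in vertices}
    for (a, b) in allowed_edges(vertices, edges):
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for y in adj[x] - seen:
            seen.add(y)
            stack.append(y)
    return seen

def gsim(vertices, edges, u, v):
    # u ~ v iff u = v, or u and v share a factor-component and
    # def(G - u - v) > def(G).
    if u == v:
        return True
    if v not in factor_component(vertices, edges, u):
        return False
    rest_v = [x for x in vertices if x not in (u, v)]
    rest_e = [e for e in edges if u not in e and v not in e]
    return deficiency(rest_v, rest_e) > deficiency(vertices, edges)

# The four-cycle C4: opposite vertices are equivalent, adjacent ones are not.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
classes = {frozenset(x for x in V if gsim(V, E, u, x)) for u in V}
print(sorted(sorted(c) for c in classes))  # [[0, 2], [1, 3]]
```

On $C_4$ this recovers the classical Kotzig-Lov\'asz classes, namely the two pairs of opposite vertices.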
The next observation follows from Lemmas~\ref{lem:d2single} and \ref{lem:a2sim}. \begin{observation} \label{note:inconstpart} Let $G$ be a graph, and let $H\in\inconst{G}$. Then, $\pargpart{G}{H} = \{A(G)\cap V(H)\} \cup \{ \{x\} : x\in D(G)\cap V(H)\}$. \end{observation} \begin{observation} Let $G$ be a graph. For a factor-component $H\in\comp{G}$, $\pargpart{G}{H}$ consists of only a single member if and only if $| V(H) | = 1$, which implies that its only vertex is in $D(G)$. \end{observation} \begin{remark} An alternative way to define $\gsim{G}$ is the following. Given a graph $G$, for any $u,v\in V(G)\setminus D(G)$, we say $u\gsim{G} v$ if $u$ and $v$ are contained in the same factor-component and $\deficiency{G-u-v} > \deficiency{G}$ holds. Obviously, $\gsim{G}$ is also an equivalence relation over $V(G)\setminus D(G)$, and its equivalence classes coincide with those given in this section, except for the trivial classes over $D(G)$. This alternative formulation may be preferable given the nature of matchings shown in our next paper~\cite{DBLP:conf/cocoa/Kita13, kita2012canonical}. However, in this paper, we employ the formulation given at the beginning of this section. \end{remark} \section{Basilica Type Relationship and Definition of New Canonical Decomposition} \label{sec:cor} \subsection{Relationship between $\yield$ and $\gsim{G}$} \label{sec:cor:cor} There is a relationship between the partial order $\yield$ and the generalized Kotzig-Lov\'asz decomposition, even though they are defined independently. We state this relationship in Theorem~\ref{thm:base}, using the following definitions and lemmas. \begin{definition} Let $G$ be a graph, and let $H\in\mathcal{G}(G)$. We denote by $\parupstar{G}{H}$ the set of upper bounds of $H$ in the poset $(\mathcal{G}(G), \yield)$; that is, $\parupstar{G}{H} := \{ H'\in\mathcal{G}(G): H\yield H'\}$.
We define $\parup{G}{H} := \parupstar{G}{H}\setminus \{H\}$ and denote by $\vparupstar{G}{H}$ and $\vparup{G}{H}$ the sets of vertices that are contained in the factor-components in $\parupstar{G}{H}$ and in $\parup{G}{H}$, respectively; that is, $\vparupstar{G}{H} := \bigcup_{H'\in\parupstar{G}{H}} V(H')$ and $\vparup{G}{H} := \bigcup_{H'\in\parup{G}{H}} V(H')$. We often omit the subscripts ``$G$'' if they are apparent from the context. \end{definition} \begin{lemma}\label{lem:nonrefinable} Let $G$ be a graph and $M$ be a maximum matching of $G$, and let $G_1, G_2 \in \mathcal{G}(G)$ be distinct factor-components. If there exists an $M$-ear $P$ with $\earint{P} \subseteq C(G)$ that is relative to $G_1$ and traverses $G_2$, then $G_1\yield G_2$ holds. Accordingly, any factor-component traversed by $P$ is an upper bound of $G_1$. \end{lemma} \begin{proof} As stated in Lemma~\ref{lem:inductive-ear}, $\parcondxp{\earint{P}}{P}{G_1}{M}{G}$ holds. Thus, from Lemma~\ref{lem:compclosure}, using $\earint{P}$, we can construct a critical-inducing set for $G_1$ to $G_2$. Thus, $G_1\yield G_2$ holds, and accordingly the remaining statement is also obtained. \qed \end{proof} \begin{lemma}\label{lem:ear-base} Let $G$ be a graph and $M$ be a maximum matching of $G$, and let $H\in \mathcal{G}(G)$. Let $P$ be an $M$-ear relative to $H$ with end vertices $u, v \in V(H)$ and with $\earint{P}\subseteq C(G)$. Then $u\gsim{G} v$ holds. \end{lemma} \begin{proof} First, note $\earint{P}\subseteq \vup{H}$ according to Lemma~\ref{lem:nonrefinable}. Hence, if $H$ is an inconsistent factor-component, then $u, v\in A(G)\cap V(H)$ holds. Therefore, we obtain $u\gsim{G} v$ from Lemma~\ref{lem:a2sim}. Hence, in the following, assume that $H$ is consistent. Suppose the claim fails, that is, $u \not\gsim{G} v$ holds. Then, from Lemma~\ref{lem:def2saturated}, there is an $M$-saturated path $Q$ between $u$ and $v$.
Trace $Q$ from $u$, and let $x$ be the first vertex we encounter that is in $V(P)\setminus \{u\}$. If $x = v$, then $Q + P$ is an $M$-alternating circuit that contains some non-allowed edges of $\parcut{G}{H}$, which contradicts Lemma~\ref{lem:allowed}. Hence, we assume $x\in \earint{P}\setminus \{u\}$ in the following. If $uPx$ has an even number of edges, then $uQx + xPu$ is an $M$-alternating circuit with some non-allowed edges of $\parcut{G}{H}$, which is again a contradiction. Hence, we assume that $uPx$ has an odd number of edges. Let $I\in \mathcal{G}(G)$ be the factor-component that contains $x$. The connected components of $uQx + xPu - E(I)$ are $M$-ears relative to $I$, and one of them traverses $H$. This implies $I\yield H$ under Lemma~\ref{lem:nonrefinable}, which contradicts $H\yield I$ under Theorem~\ref{thm:order}. \qed \end{proof} \begin{lemma} \label{lem:base} Let $G$ be a graph and $M$ be a maximum matching of $G$, and let $G_0\in\mathcal{G}(G)$. Let $X\subseteq C(G)$ be a set of vertices such that $\parcondxp{X}{P}{G_0}{M}{G}$ holds for some $M$-ear $P$ relative to $G_0$. Then, \begin{enumerate} \renewcommand{\labelenumi}{{\rm \theenumi}} \renewcommand{\theenumi}{(\roman{enumi})} \item \label{item:k} there exists a connected component $K$ of $G[\vup{G_0}]$ with $X \subseteq V(K)$; and, \item \label{item:n} there exists $T\in\pargpart{G}{G_0}$ such that $\parNei{G}{X} \cap V(G_0) \subseteq T$ holds. \end{enumerate} \end{lemma} \begin{proof} As Lemma~\ref{lem:compclosure} states that $X$ is contained in a critical-inducing set for $G_0$, we have $X\subseteq \vup{G_0}$. Additionally, $\parcondxp{X}{P}{G_0}{M}{G}$ implies that $G[X]$ is connected. Therefore, \ref{item:k} follows. Let $u$ and $v$ be the ends of $P$. From Lemma~\ref{lem:ear-base}, there exists $T\in\pargpart{G}{G_0}$ with $\{u, v\} \subseteq T$. Let $w \in \parNei{G}{X}\cap V(G_0)$, and let $z\in X$ be a vertex with $wz\in E(G)$.
From Lemma~\ref{lem:int2root}, there exists an $M$-forwarding path $Q$ from $z$ to $r\in \{u, v\}$ with $V(Q)\setminus \{r\} \subseteq X$. Then, $wz + Q$ forms an $M$-ear relative to $G_0$ whose ends are $w$ and $r$. Therefore, from Lemma~\ref{lem:ear-base}, $w\in T$ follows, and we have \ref{item:n}. \qed \end{proof} The relationship between the basilica order and the generalized Kotzig-Lov\'asz decomposition is shown in the next theorem. \begin{theorem}\label{thm:base} Let $G$ be a graph, and let $G_0\in\mathcal{G}(G)$. For each connected component $K$ of $G[\vup{G_0}]$, there exists $T_K\in\pargpart{G}{G_0}$ such that $\parNei{G}{K}\cap V(G_0)\subseteq T_K$. \end{theorem} \begin{proof} Let $M$ be a maximum matching of $G$. Define a family $\mathcal{X}\subseteq 2^{V(K)}$ as follows: $X\subseteq V(K)$ is a member of $\mathcal{X}$ if $\parcondxp{X}{P}{G_0}{M}{G}$ holds for some $M$-ear $P$ relative to $G_0$. \begin{cclaim} It holds that $\bigcup_{X\in\mathcal{X}} X = V(K)$. \end{cclaim} \begin{proof} From the definition of $\mathcal{X}$, clearly $\bigcup_{X\in\mathcal{X}} X \subseteq V(K)$. Conversely, $V(K)$ is obviously separating, and, from Lemma~\ref{lem:order}, each factor-component composing $K$ is contained in a set of vertices $X$ with $\parcondxp{X}{P}{G_0}{M}{G}$ for some $M$-ear $P$ relative to $G_0$; from Lemma~\ref{lem:base}, this $X$ satisfies $X\subseteq V(K)$. Therefore, we have $\bigcup_{X\in\mathcal{X}} X \supseteq V(K)$. \qed \end{proof} For each $T\in\pargpart{G}{G_0}$, we define $\mathcal{X}_T \subseteq \mathcal{X}$ as follows: $X\in\mathcal{X}$ is a member of $\mathcal{X}_T$ if $\parNei{G}{X}\cap V(G_0) \subseteq T$ holds. From Lemma~\ref{lem:base}, if $S\neq T$, then $\mathcal{X}_S \cap \mathcal{X}_T = \emptyset$; additionally, $\bigcup_{T\in\pargpart{G}{G_0}} \mathcal{X}_T = \mathcal{X}$ holds. \begin{cclaim} \label{claim:disjoint} Let $S, T\in \pargpart{G}{G_0}$. Let $X \in \mathcal{X}_S$ and $Y \in \mathcal{X}_T$.
If $X \cap Y \neq \emptyset$, then $S = T$. If $X \cap Y = \emptyset$ and $E[X, Y] \neq \emptyset$, then $S = T$. \end{cclaim} \begin{proof} First assume $X \cap Y \neq \emptyset$. As both $X$ and $Y$ are closed with respect to $M$, so is $X\cap Y$. Take $x\in X\cap Y$ arbitrarily; from Lemma~\ref{lem:int2root}, we have an $M$-forwarding path $Q$ from $x$ to a vertex $r\in V(G_0)$ with $V(Q)\setminus \{r\} \subseteq X$; from Lemma~\ref{lem:base}, we have $r\in S$. Trace $Q$ from $r$, and let $y$ be the first vertex we encounter that is in $\parNei{G}{X\cap Y}$; let $z\in X\cap Y$ be such that $yz\in E[X\setminus Y, X\cap Y]$. Here, $rQy$ is an $M$-forwarding path with $V(rQy)\setminus \{r\} \subseteq X\setminus Y$. By contrast, we also have an $M$-forwarding path $R$ from $z$ to a vertex $s\in T$ with $V(R)\setminus \{s\} \subseteq Y$. Here, $rQy + yz + R$ is an $M$-ear relative to $G_0$ with ends $r$ and $s$. From Lemma~\ref{lem:ear-base}, $S = T$ follows. Next, assume $X \cap Y = \emptyset$ and $E[X, Y] \neq \emptyset$. Let $t_1\in X$ and $t_2 \in Y$ be vertices with $t_1t_2\in E[X, Y]$. From Lemma~\ref{lem:int2root}, for each $i\in\{1,2\}$, we have an $M$-forwarding path $L_i$ from $t_i$ to a vertex $r_i\in V(G_0)$ with $V(L_1)\setminus\{r_1\}\subseteq X$ and $V(L_2)\setminus\{r_2\}\subseteq Y$; from Lemma~\ref{lem:base}, we have $r_1\in S$ and $r_2\in T$. Therefore, $L_1 + t_1t_2 + L_2$ forms an $M$-ear relative to $G_0$ with ends $r_1$ and $r_2$. From Lemma~\ref{lem:ear-base}, again $S = T$ follows. \qed \end{proof} As $K$ is connected, Claim~\ref{claim:disjoint} implies $|\{ T\in\pargpart{G}{G_0} : \mathcal{X}_T \neq \emptyset\}| = 1$. This completes the proof. \qed \end{proof} \subsection{Declaration of New Canonical Decomposition} \label{sec:cor:dec} We can now declare a new canonical decomposition in which the basilica order and the generalized Kotzig-Lov\'asz decomposition are unified through Theorem~\ref{thm:base}. 
According to Theorem~\ref{thm:base}, the strict upper bounds on a factor-component are each ``attached'' or ``assigned'' to an equivalence class of the generalized Kotzig-Lov\'asz decomposition. That is, let $H$ be a factor-component of a graph $G$, let $I\in\comp{G}\setminus \{H\}$ be such that $H\yield I$, and let $K$ be the connected component of $G[\vup{H}]$ with $V(I)\subseteq V(K)$. If $S\in\pargpart{G}{H}$ is such that $\parNei{G}{K}\cap V(H)\subseteq S$ as in Theorem~\ref{thm:base}, then we can view $I$ as being ``attached'' or ``assigned'' to $S$ as an upper bound on $H$. Hence, a graph can be regarded as being constructed by repeatedly assigning and attaching each factor-component to an equivalence class possessed by a lower bound. Although the basilica order structure and the generalized Kotzig-Lov\'asz decomposition themselves can be considered individually as canonical decompositions, they are integrated into a single theory of a canonical decomposition through the relationship given by Theorem~\ref{thm:base}. We call this integrated concept the {\em basilica decomposition}, because it evokes the image of a graph structured like an architectural building. The term ``basilica'' comes from the {\em cathedral theorem} by Lov\'asz~\cite{lovasz1972b, lp1986}, which is an inductive characterization of {\em saturated graphs}. In fact, the cathedral theorem can be derived from our new canonical decomposition~\cite{kita2014alternative}. \section{Inconsistent Factor-components Via Gallai-Edmonds Structure Theorem} In this section, we use the Gallai-Edmonds structure theorem to obtain further information about the inner structure of inconsistent factor-components. \begin{lemma}\label{lem:forwarding2allowed} Let $G$ be a graph, $M$ be a maximum matching of $G$, and $r$ be a vertex exposed by $M$. If $P$ is an $M$-forwarding path from some vertex to $r$, then all edges of $P$ are allowed and therefore $P$ is contained in a factor-component.
\end{lemma} \begin{proof} If $P$ is such a path, then $M\triangle E(P)$ is also a maximum matching of $G$; each edge of $P$ is contained in $M$ or in $M\triangle E(P)$, and hence is allowed. Thus, the claim follows. \qed \end{proof} \begin{lemma}\label{lem:d2comp} Let $G$ be a graph. If $K$ is a connected component of $G[D(G)]$, then the vertices in $V(K)\cup \parNei{G}{K}$ are contained in the same factor-component. \end{lemma} \begin{proof} According to Theorem~\ref{thm:gallaiedmonds}, $K$ is factor-critical. Let $r\in V(K)$, and let $M$ be a maximum matching of $G$ exposing $r$. Arbitrarily choose $x\in V(K)$. From Lemma~\ref{lem:path2root}, there is an $M$-forwarding path $P$ from $x$ to $r$. From Lemma~\ref{lem:forwarding2allowed}, $x$ and $r$ are contained in the same factor-component. Thus, all vertices of $K$ are contained in the same factor-component. From Theorem~\ref{thm:gallaiedmonds}, any edge in $\parcut{G}{K}$ is allowed. Therefore, the vertices in $\parNei{G}{K}$ are also contained in the same factor-component as the vertices of $K$. \qed \end{proof} The next theorem follows from Lemma~\ref{lem:d2comp} and Theorem~\ref{thm:gallaiedmonds}. \begin{theorem}\label{thm:inconst2connected} Let $G$ be a graph. A subgraph $H$ of $G$ is an inconsistent factor-component of $G$ if and only if it is a connected component of $G[D(G)\cup A(G)]\setminus E(G[A(G)])$. \end{theorem} \section{Pertinent Properties}\label{sec:pertinentprops} \subsection{Non-triviality of $\yield$} \label{sec:add} The following theorem shows that most factorizable graphs with more than one factor-component have a non-trivial structure as posets. \begin{theorem}\label{thm:add} Let $G$ be a factorizable graph, and let $G_1, G_2 \in \mathcal{G}(G)$ be factor-components such that $G_1\yield G_2$ does not hold and $G_1$ is minimal in the poset $(\mathcal{G}(G), \yield)$. Then there are (possibly identical) edges $e$ and $f$ of the complement of $G$, each joining $G_1$ and $G_2$, such that $\mathcal{G}(G + e + f) = \mathcal{G}(G)$ and $G_1\yield G_2$ holds in $(\mathcal{G}(G+e+f), \yield)$.
\end{theorem} \begin{proof} First, we prove the case where there is an edge $xy$ with $x\in V(G_1)$ and $y\in V(G_2)$. Let $M$ be a perfect matching of $G$. Choose a vertex $w\in V(G_2)$ with $w\not\sim y$ in $G_2$, and let $P$ be an $M$-saturated path of $G_2$ between $w$ and $y$. If $xw\in E(G)$ holds, then $xy + P + wx$ is an $M$-ear that is relative to $G_1$ and traverses $G_2$. This implies $G_1\yield G_2$ under Lemma~\ref{lem:nonrefinable}, which is a contradiction. Thus, $xw\not\in E(G)$ holds. Suppose $\mathcal{G}(G+xw) \neq \mathcal{G}(G)$. Then, Lemma~\ref{lem:allowed} implies that $G + xw$ has an $M$-alternating circuit that contains $xw$, and hence $G$ has an $M$-saturated path $C$ between $x$ and $w$. Trace $C$ from $x$, and let $z$ be the first vertex in $V(G_2)$ that we encounter. Then, $xy + xCz$ is an $M$-ear of $G$ that is relative to $G_2$ and traverses $G_1$, which implies $G_2\yield G_1$ under Lemma~\ref{lem:nonrefinable}; this contradicts the minimality of $G_1$. Thus, $\mathcal{G}(G+xw) = \mathcal{G}(G)$ holds, and, as above, the $M$-ear $xy + P + wx$ in $G + xw$ yields $G_1\yield G_2$; we have proved this case. We now consider the other case, where no edge of $G$ connects $G_1$ and $G_2$. Choose $x\in V(G_1)$ and $y\in V(G_2)$ arbitrarily. If $\mathcal{G}( G + xy ) = \mathcal{G}(G)$ holds, then we can reduce it to the first case and the claim follows. Therefore, it suffices to consider the case with $\mathcal{G}(G + xy ) \neq \mathcal{G}(G)$. Then, from Lemma~\ref{lem:allowed}, for any perfect matching $M$ of $G$, $G+xy$ has an $M$-alternating circuit that contains $xy$. Thus, we have an $M$-saturated path $C$ between $x$ and $y$ in $G$. Trace $C$ from $y$, and let $u$ be the first vertex in $V(G_1)$ that we encounter. Furthermore, trace $uCy$ from $u$, and let $v$ be the first vertex we encounter that is in $V(G_2)$. If $\mathcal{G}(G + uv) = \mathcal{G}(G)$, then the claim follows by the same argument.
Otherwise, that is, if $\mathcal{G}(G + uv) \neq \mathcal{G}(G)$, then Lemma~\ref{lem:allowed} implies that $G + uv$ has an $M$-alternating circuit that contains $uv$. Thus, we have an $M$-saturated path $D$ between $u$ and $v$ in $G$. Trace $D$ from $u$, and let $w$ be the first vertex of $vCu - u$ that we encounter. If $wCu$ has an even number of edges, then $wCu + uDw$ is an $M$-alternating circuit of $G$ that contains non-allowed edges, which is a contradiction according to Lemma~\ref{lem:allowed}. Therefore, we assume that $wCu$ has an odd number of edges. Let $H\in\mathcal{G}(G)$ be the factor-component with $w\in V(H)$. Then, $wCu + uDw - E(H)$ is an $M$-ear that is relative to $H$ and traverses $G_1$; this implies $H\yield G_1$ from Lemma~\ref{lem:nonrefinable}, which contradicts the minimality of $G_1$. This completes the proof. \qed \end{proof} \subsection{Vertices in Upper Bounds} From Theorem~\ref{thm:base}, the following is derived rather easily. \begin{theorem}\label{thm:maximumup} Let $G$ be a graph, and let $H\in\mathcal{G}(G)$. Then, $\vupstar{H}$ is the maximum critical-inducing set for $H$; that is, the union of all the critical-inducing sets for $H$ is also a critical-inducing set for $H$ and equals $\vupstar{H}$. \end{theorem} \begin{proof} By Theorem~\ref{thm:order}, any critical-inducing set for $H$ is contained in $\vupstar{H}$. Therefore, by Lemma~\ref{lem:union}, the union of all the critical-inducing sets for $H$ is also a critical-inducing set for $H$, contained in $\vupstar{H}$. Conversely, by the definition of $\upstar{H}$, for each $I\in\upstar{H}$, there is a critical-inducing set for $H$ to $I$. Therefore, $\vupstar{H}$ is contained in, and accordingly coincides with, the union of all the critical-inducing sets. \qed \end{proof} Thus, we have the following as a corollary of Theorem~\ref{thm:maximumup}. \begin{corollary}\label{cor:2fc} Let $G$ be a graph, and let $H\in\mathcal{G}(G)$ and $S\in \pargpart{G}{H}$.
Let $K_1,\ldots, K_l$, where $l \ge 1$, be the connected components of $G[\vup{H}]$ such that $\Gamma(K_i)\cap V(H)\subseteq S$ for each $i \in \{ 1,\ldots, l\}$. Then, $G[ V(K_1)\cup\cdots\cup V(K_l) \cup S]/S$ is factor-critical. \end{corollary} \subsection{Immediate Compatible Pair of Factor-Components} \begin{lemma} \label{lem:increment} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $X\subseteq V(G)$ be a critical-inducing set for $G_1\in \comp{G}$ and let $P$ be an $M$-ear relative to $X$ with $\earint{P}\neq\emptyset$ and $\earint{P}\subseteq C(G)$. Let $Y := X \cup V(H_1)\cup \cdots \cup V(H_k)$, where $H_1,\ldots, H_k$ are the factor-components that $P$ traverses. Then, $Y$ is a critical-inducing set for $G_1$. \end{lemma} \begin{proof} According to Lemma~\ref{lem:path2base}, for each $v\in X$, there is an $M$-forwarding path $R_v$ from $v$ to a vertex in $V(G_1)$. From Lemma~\ref{lem:extension}, $\parcondxp{\earint{P}}{P}{X}{M}{G}$ holds. Furthermore, from Lemma~\ref{lem:compclosure}, $\parcondxp{Y\setminus X}{P}{X}{M}{G}$ holds. Hence, for each $x\in Y\setminus X$, there is an $M$-forwarding path $L_x$ from $x$ to a vertex $w$ that is equal to one of the ends of $P$, according to Lemma~\ref{lem:int2root}. Therefore, $L_{x} + R_{w}$ is an $M$-forwarding path from $x$ to a vertex in $V(G_1)$. Thus, the statement is proved by Lemma~\ref{lem:path2base}. \qed \end{proof} \begin{lemma} \label{lem:intermediate} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $X\subseteq V(G)$ be a critical-inducing set for $G_1\in \comp{G}$ and let $P$ be an $M$-ear relative to $X$ with ends $u_1$ and $u_2$. Then, there exist a factor-component $H$ with $V(H)\subseteq X$ and an $M$-ear $Q$ relative to $H$ such that $E(Q)\setminus E(G[X]) = E(P)$. \end{lemma} \begin{proof} Under Lemma~\ref{lem:path2base}, for each $i\in \{1,2\}$, there is an $M$-forwarding path $Q_i$ from $u_i$ to some vertex in $V(G_1)$.
Trace $Q_2$ from $u_2$, and let $x$ be the first vertex we encounter that lies in a factor-component $H$ that also contains some vertices of $Q_1$; such an $H$ certainly exists because $G_1$ shares some vertices with both $Q_1$ and $Q_2$. Furthermore, trace $Q_1$ from $u_1$, and let $z$ be the first vertex we encounter that is in $H$. Then, $H$ and $zQ_1u_1 + P + u_2Q_2x$ are the desired factor-component and $M$-ear. \qed \end{proof} \begin{proposition} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $G_1, G_2\in \comp{G}$ be distinct factor-components with $G_1\yield G_2$. If $G_1$ and $G_2$ are immediate, that is, for any $H\in \comp{G}$, $G_1\yield H \yield G_2$ implies $G_1 = H$ or $G_2 = H$, then there is an $M$-ear relative to $G_1$ that traverses $G_2$. \end{proposition} \begin{proof} Let $X$ be a critical-inducing set for $G_1$ to $G_2$. Let $X'$ be a maximal (in fact, the maximum) subset of $X\setminus V(G_2)$ that is critical-inducing for $G_1$; such an $X'$ certainly exists because $V(G_1)$, which satisfies $V(G_1)\subseteq X\setminus V(G_2)$, is critical-inducing for $G_1$. Under Lemma~\ref{lem:inductive-ear}, there is an $M$-ear $P$ relative to $X'$ with $V(P)\subseteq X$ and $\earint{P} \neq \emptyset$. From Lemma~\ref{lem:increment}, the minimum separating set that contains $X'\cup V(P)$ is a critical-inducing set for $G_1$. Therefore, by the maximality of $X'$, this set is not contained in $X\setminus V(G_2)$; that is, $P$ traverses $G_2$. Furthermore, Lemma~\ref{lem:intermediate} implies that we can use this $P$ to obtain an $M$-ear that is relative to a factor-component $H$ with $V(H)\subseteq X'$ and traverses $G_2$. Therefore, from Lemma~\ref{lem:nonrefinable}, $G_1 \yield H$ and $H \yield G_2$ hold. As $H\neq G_2$ holds, the immediacy of $G_1$ and $G_2$ implies $H = G_1$. This completes the proof of this proposition. \qed \end{proof} \section{Algorithmic Results} \label{sec:alg} \subsection{Algorithmic Preliminaries} \label{sec:alg:pre} In the remainder of this paper, we present algorithms for computing the basilica decomposition.
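As a concrete reference point for the structures computed below, the following Python sketch derives the Gallai-Edmonds sets $D(G)$, $A(G)$, and $C(G)$ directly from their definitions by brute force. It is exponential-time and purely illustrative (Theorem~\ref{dacalg} obtains these sets in linear time once a maximum matching is available); the function names are ours.

```python
from itertools import combinations

def max_matching_size(vertices, edges):
    # Largest set of pairwise vertex-disjoint edges (brute force; tiny graphs only).
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            if len({v for e in sub for v in e}) == 2 * k:
                return k
    return 0

def gallai_edmonds_sets(vertices, edges):
    # D(G): vertices exposed by at least one maximum matching, i.e. the
    # vertices v with nu(G - v) = nu(G).  A(G) = N(D(G)) \ D(G); C(G) = rest.
    nu = max_matching_size(vertices, edges)
    D = set()
    for v in vertices:
        rest_v = [x for x in vertices if x != v]
        rest_e = [e for e in edges if v not in e]
        if max_matching_size(rest_v, rest_e) == nu:
            D.add(v)
    A = {u for (u, v) in edges if v in D and u not in D}
    A |= {v for (u, v) in edges if u in D and v not in D}
    C = set(vertices) - D - A
    return D, A, C

# Path 0-1-2 plus a disjoint edge 3-4: the path contributes D = {0, 2}
# and A = {1}; the perfectly matchable edge contributes C = {3, 4}.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (3, 4)]
print(gallai_edmonds_sets(V, E))
```

The same brute-force pattern (compare deficiencies after deleting vertices) underlies all the definitional checks in this paper, whereas the algorithms below achieve the stated polynomial bounds.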
Section~\ref{sec:alg:pre} presents some preliminary facts that will be used in the remaining sections. We denote by $n$ and $m$ the numbers of vertices and edges of an input graph, respectively. Note that we can assume $m = \Omega(n)$ and, accordingly, $O(n+m) = O(m)$ if an input graph is connected or factorizable. Section~\ref{sec:alg:comp} provides an algorithm for computing the factor-components, and then Sections~\ref{sec:alg:part} and \ref{sec:alg:order} present how to compute the generalized Kotzig-Lov\'asz decomposition and the basilica order. Each of these computations costs $O(nm)$ time, using Edmonds' maximum matching algorithm as a subroutine~\cite{edmonds1965}. \begin{theorem}[Micali and Vazirani~\cite{mv1980}, Vazirani~\cite{vazirani1994}]\label{thm:matchingalg} Given a graph, one of its maximum matchings can be computed in $O(\sqrt{n}m)$ time. \end{theorem} The following two statements can be found implicitly in Edmonds' algorithm~\cite{edmonds1965}. See also Lov\'asz and Plummer~\cite{lp1986}. \begin{theorem}[implicitly stated in Edmonds~\cite{edmonds1965}]\label{dacalg} Given a graph $G$ and a maximum matching $M$, the sets $D(G)$, $A(G)$, and $C(G)$ can be computed in $O(n+m)$ time. \end{theorem} \begin{proposition}[implicitly stated in Edmonds~\cite{edmonds1965}]\label{prop:rootblossom} Let $G$ be a graph and $M$ be a maximum matching of $G$, and let $r\in V(G)$ be a vertex exposed by $M$. Let $C$ be the connected component of $G[D(G)]$ that contains $r$. \begin{rmenum} \item For any maximum matching $M'$ of $G$ that exposes $r$, the restriction of $M'$ to $V(C)$ is a near-perfect matching of $C$. \item Define $\mathcal{X}\subseteq 2^{V(G)}$ as follows: $X\subseteq V(G)$ is a member of $\mathcal{X}$ if $r\in X$ holds, $G[X]$ is factor-critical, and $M_X$, the set of edges of $M$ with both ends in $X$, is a near-perfect matching of $G[X]$ exposing $r$. Then, the maximum member of $\mathcal{X}$ is equal to $V(C)$. \item Given $G$, $M$, and $r$, $C$ can be computed in $O(m)$ time.
\end{rmenum} \end{proposition} The next statement can be deduced from Edmonds' algorithm. See also Carvalho and Cheriyan~\cite{cc2005}. \begin{proposition}\label{prop:pathalg} Let $G$ be a factorizable graph and $M$ be a perfect matching of $G$, and let $u\in V(G)$. \begin{rmenum} \item The set of vertices that can be reached from $u$ by an $M$-saturated path can be computed in $O(m)$ time. \item All the allowed edges adjacent to $u$ can be computed in $O(m)$ time. \item All the factor-components of $G$ can be computed in $O(nm)$ time. \end{rmenum} \end{proposition} \subsection{Computing Factor-components} \label{sec:alg:comp} Propositions~\ref{prop:fcomp2dac} and \ref{prop:pathalg} show how to compute the consistent factor-components, whereas Theorem~\ref{thm:inconst2connected} implies an algorithm for computing the inconsistent factor-components. Hence, we now obtain the following: \begin{theorem} \label{thm:compalg} Given a graph $G$, one of its perfect matchings $M$, and the sets $D(G)$, $A(G)$, and $C(G)$, the factor-components of $G$ can be computed in $O(nm)$ time. \end{theorem} \begin{proof} Under Proposition~\ref{prop:fcomp2dac}, we can compute $\comp{G}$ by computing $\comp{G[D(G)\cup A(G)]}$ and $\comp{G[C(G)]}$ individually. From Theorem~\ref{thm:inconst2connected}, we can compute $\comp{G[D(G)\cup A(G)]}$ in $O(n + m)$ time. From Proposition~\ref{prop:pathalg}, we can compute $\comp{G[C(G)]}$ in $O(nm)$ time. Therefore, we can obtain $\comp{G}$ in $O(nm)$ time. \qed \end{proof} \subsection{Computing the Generalized Kotzig-Lov\'asz Decomposition} \label{sec:alg:part} From Observation~\ref{note:inconstpart} and Proposition~\ref{prop:pathalg}, we can compute the generalized Kotzig-Lov\'asz decomposition. \begin{theorem} \label{thm:partalg} Given a graph $G$, one of its maximum matchings $M$, the set of factor-components $\comp{G}$, and the sets $D(G)$, $A(G)$, and $C(G)$, the generalized Kotzig-Lov\'asz decomposition of $G$ can be computed in $O(nm)$ time. 
\end{theorem} \begin{proof} We compute $\pargpart{G}{H}$ for each $H\in\comp{G}$. According to Observation~\ref{note:inconstpart}, if $H$ is inconsistent then $\pargpart{G}{H} = \{ V(H)\cap A(G) \} \cup \bigcup \{ \{x\} : x\in V(H)\setminus A(G)\}$. Therefore, $\pargpart{G}{H}$ for all $H\in\inconst{G}$ can be computed in $O(n)$ time in total. If $H$ is consistent, we can compute $\pargpart{G}{H}$ in a similar way to the Kotzig-Lov\'asz decomposition of a consistently factor-connected graph~\cite{cc2005}. That is, for each $v\in V(H)$, compute the set of vertices $U$ that can be reached from $v$ by an $M$-saturated path, and recognize $V(H)\setminus U$ as a member of $\pargpart{G}{H}$. Each $U\in\pargpart{G}{H}$ can be computed in $O(m)$ time according to Proposition~\ref{prop:pathalg}. Therefore, computing $\pargpart{G}{H}$ for all $H\in\const{G}$ costs $O(nm)$ time. Thus, the proof is completed. \qed \end{proof} \subsection{Computing the Basilica Order} \label{sec:alg:order} In this section, we present an algorithm for computing the basilica order in $O(nm)$ time. We determine the poset by computing $\vup{H}$ for each factor-component $H$. The following lemmas are provided to associate $\vup{H}$ with Proposition~\ref{prop:rootblossom}. Lemmas~\ref{lem:nopath2comp} and \ref{lem:preserve} are used to prove Lemma~\ref{lem:vup2dcomp}. \begin{lemma} \label{lem:nopath2comp} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $H\in\comp{G}$. For any $x\in V(G)\setminus V(H)$ exposed by $M$ and any $y\in V(H)$, there is no $M$-exposed path from $x$ to $y$. \end{lemma} \begin{proof} Suppose this lemma fails, and let $P$ be an $M$-exposed path from $x\in V(G)\setminus V(H)$ to $y\in V(H)$. Then, $y$ is covered by $M$, because otherwise $M\triangle E(P)$ would be a bigger matching of $G$ than $M$. Hence, there is a vertex $y'\in V(H)$ to which $y$ is matched by $M$. 
Then, $P+yy'$ is an $M$-forwarding path from $y'$ to $x$, and therefore, from Lemma~\ref{lem:forwarding2allowed}, $y'$ and $x$ are contained in the same factor-component, which is a contradiction. \qed \end{proof} \begin{lemma} \label{lem:preserve} Let $G$ be a graph and $M$ be a maximum matching of $G$. Let $H\in\comp{G}$. Then, $M_{V(G)\setminus V(H)}$ is a maximum matching of $G/H$. \end{lemma} \begin{proof} Suppose this lemma fails, that is, $M_{V(G)\setminus V(H)}$ is not a maximum matching of $G/H$. Then, $G/H$ has an $M$-exposed path $P$ in which one end is the contracted vertex $h$ that corresponds to $H$ and the other end is a vertex exposed by $M$. In $G$, $P$ forms an $M$-exposed path from a vertex not in $V(H)$ to a vertex of $H$. This contradicts Lemma~\ref{lem:nopath2comp}. \qed \end{proof} The next lemma associates the set of strict upper bounds with the special subgraph $C$ described in Proposition~\ref{prop:rootblossom}. \begin{lemma}\label{lem:vup2dcomp} Let $G$ be a graph and $M$ be a maximum matching of $G$, and let $G_0\in\mathcal{G}(G)$. Let $G' := G/G_0$, and let $g_0$ be the contracted vertex that corresponds to $G_0$. Then, there exists a connected component $C$ of $G'[D(G')]$ with $g_0\in V(C)$ such that $V(C)\setminus \{g_0\}$ is equal to $\vparup{G}{G_0}$. \end{lemma} \begin{proof} Let $M' := M \setminus E(G_0)$. Define $\mathcal{X}\subseteq 2^{V(G)}$ as follows: $X\subseteq V(G)$ is a member of $\mathcal{X}$ if $V(G_0)\subseteq X$ holds, $G[X]/G_0$ is factor-critical, and $M_X$ is a perfect matching of $G[X]$. Additionally, define $\mathcal{X}'\subseteq 2^{V(G')}$ as follows: $X'\subseteq V(G')$ is a member of $\mathcal{X}'$ if $g_0\in X'$ holds, $G'[X']$ is factor-critical, and $M'_{X'}$ is a near-perfect matching of $G'[X']$, exposing $g_0$. It is easy to see that for $X\subseteq V(G)$ and $X'\subseteq V(G')$ with $X \setminus V(G_0) = X'\setminus \{g_0\}$, $X\in \mathcal{X}$ holds if and only if $X'\in \mathcal{X}'$ holds. 
According to Lemma~\ref{lem:preserve}, $M'$ is a maximum matching of $G'$, which exposes $g_0$. Hence, from Proposition~\ref{prop:rootblossom}, there exists a connected component $C$ of $G'[D(G')]$ with $g_0\in V(C)$, and $V(C)$ is equal to the maximum member of $\mathcal{X}'$. Accordingly, $\mathcal{X}$ has the maximum member $X_0$, with $X_0\setminus V(G_0) = V(C)\setminus \{g_0\}$. In the following, we prove $X_0 = \vparupstar{G}{G_0}$. From Proposition~\ref{prop:rootblossom}, with respect to any maximum matching of $G'$ that exposes $g_0$, $V(C)$ is closed. This implies that $X_0$ is a separating set of $G$. Accordingly, $X_0$ is a critical-inducing set for $G_0$, and therefore, from Theorem~\ref{thm:maximumup}, $X_0\subseteq \vparupstar{G}{G_0}$ holds. Conversely, $X_0\supseteq \vparupstar{G}{G_0}$ holds, because $\vparupstar{G}{G_0} \in \mathcal{X}$ holds. Hence, we obtain $X_0 = \vparupstar{G}{G_0}$, and therefore, $\vparup{G}{G_0} = V(C)\setminus \{g_0\}$. \qed \end{proof} The next statement immediately follows from Proposition~\ref{prop:rootblossom} and Lemma~\ref{lem:vup2dcomp}. \begin{lemma}\label{lem:vupalg} Given a graph $G$, a maximum matching $M$ of $G$, and $H\in\comp{G}$, $\vup{H}$ can be computed in $O(m)$ time. \end{lemma} The next theorem shows how to compute the poset of the basilica order using Lemma~\ref{lem:vupalg}. \begin{theorem}\label{thm:orderalg} Given a graph $G$, one of its maximum matchings $M$, and $\mathcal{G}(G)$, we can compute the poset $(\mathcal{G}(G), \yield)$ in $O(nm)$ time. \end{theorem} \begin{proof} It is sufficient to list all the strict upper bounds for each factor-component of $G$ by the following procedure. 
\begin{algorithmic}[1] \STATE Initialize $f: \mathcal{G}(G) \rightarrow 2^{\mathcal{G}(G)}$ by $f(H):= \emptyset$ for each $H\in\mathcal{G}(G)$; \FORALL{$H\in\mathcal{G}(G)$} \STATE compute $\vup{H}$ according to Lemma~\ref{lem:vupalg}; \FORALL{$x\in \vup{H}$} \STATE let $I\in\mathcal{G}(G)$ be such that $x\in V(I)$; \STATE $f(H) := f(H) \cup \{I\}$. \ENDFOR \ENDFOR \end{algorithmic} The correctness of the algorithm is obvious. For each $H\in \mathcal{G}(G)$, the above procedure costs $O(m)$ time; therefore, the entire computation costs $O(nm)$ time. \qed \end{proof} \subsection{Concluding Algorithms} \label{sec:alg:conclusion} From Theorems~\ref{thm:matchingalg}, \ref{dacalg}, and \ref{thm:compalg}, a maximum matching, the Gallai-Edmonds family, and the set of factor-components can be computed in $O(nm)$ time in total. Therefore, from Theorems~\ref{thm:partalg} and \ref{thm:orderalg}, we obtain an $O(nm)$ time algorithm for computing the basilica decomposition. \begin{theorem} Given a graph $G$, the basilica order $\yield$ over $\comp{G}$ and the generalized Kotzig-Lov\'asz decomposition can be computed in $O(nm)$ time. \end{theorem} \section{Conclusion} \label{sec:conclusion} We have introduced a new canonical decomposition, the {\em basilica decomposition}. The central results that support our new theory are the {\em basilica order}, a canonical partial order over the set of factor-components (Theorem~\ref{thm:order}), the generalized Kotzig-Lov\'asz decomposition (Theorem~\ref{thm:generalizedcanonicalpartition}), and the structure described by a relationship between these two, which unites them into a canonical decomposition (Theorem~\ref{thm:base}). We have also presented an $O(nm)$ time algorithm for computing the basilica decomposition. As canonical decompositions have formed the theoretical foundation of matching theory, we believe that the results in this paper will be beneficial to this field, and, by extension, to the entire field of combinatorics. 
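To make the Gallai-Edmonds sets $D(G)$, $A(G)$, and $C(G)$ that serve as input to the above algorithms concrete, the following toy sketch of our own (a brute force over maximum matchings, not the $O(n+m)$ routine of Theorem~\ref{dacalg}) computes them for a tiny graph given as an edge list:

```python
# Brute-force sketch (ours; NOT the paper's O(n+m) routine) of the
# Gallai-Edmonds sets on a tiny graph:
#   D(G): vertices exposed by at least one maximum matching,
#   A(G): neighbours of D(G) outside D(G),
#   C(G): the remaining vertices.

def matching_number(edges):
    """Maximum matching size by naive branching (tiny graphs only)."""
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    # either skip edge (u, v), or take it and drop edges touching u or v
    take = 1 + matching_number([e for e in rest if u not in e and v not in e])
    return max(matching_number(rest), take)

def gallai_edmonds(vertices, edges):
    nu = matching_number(edges)
    # v is in D(G) iff deleting v leaves the matching number unchanged,
    # i.e. some maximum matching exposes v.
    D = {v for v in vertices
         if matching_number([e for e in edges if v not in e]) == nu}
    A = {w for (u, v) in edges for w in (u, v)
         if w not in D and ({u, v} - {w}) & D}
    C = set(vertices) - D - A
    return D, A, C

# Path 0-1-2: the maximum matchings {01} and {12} expose 2 and 0, resp.
print(gallai_edmonds({0, 1, 2}, [(0, 1), (1, 2)]))  # ({0, 2}, {1}, set())
```

On a factor-critical graph such as a triangle, every vertex lands in $D(G)$, while a graph with a perfect matching has $D(G)=A(G)=\emptyset$.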
We have already obtained some important results using the ideas in this paper. \begin{itemize} \item The structure of {\em barriers}, which are a classically important notion that corresponds to the dual of maximum matchings, has been revealed~\cite{DBLP:conf/cocoa/Kita13, kita2012canonical}. \item A new proof of Lov\'asz's {\em cathedral theorem}, which is an inductive characterization of {\em saturated graphs}, has been obtained~\cite{kita2014alternative}. \item A purely graph-theoretic proof of the celebrated {\em tight cut lemma}, which has contributed to almost all the results about the perfect matching polytope since 1982, has been obtained~\cite{kita2015graph}. \end{itemize} \begin{acnt} An early version of this work was presented in the papers~\cite{kita2012partially, DBLP:conf/isaac/Kita12}. The author would like to express gratitude to Richard Hoshino and Yusuke Kobayashi for carefully reading an early version of this paper and for giving useful comments on the writing and presentation. \end{acnt} \bibliographystyle{../splncs03}
\section{Introduction} We consider difference-of-convex (DC) problems \[\tag{P}\label{eq:P} \minimize_{s\in\R^p}\varphi(s)\coloneqq g(s)-h(s), \] where \(\func{g,h}{\R^p}{\R\cup\set\infty}\) are proper, convex, lsc functions (with the convention \(\infty-\infty=\infty\)). DC problems cover a very broad spectrum of applications; a well-detailed theoretical and algorithmic analysis is presented in \cite{tao1997convex}, where the nowadays textbook algorithm DCA is presented, which interleaves subgradient evaluations \(v\in\partial h(u)\), \(u^+\in\partial\conj{g}(v)\), aiming at finding a \emph{stationary} point \(u\), that is, a point satisfying \begin{equation}\label{eq:stationary} \partial g(u)\cap\partial h(u)\neq\emptyset, \end{equation} a relaxed version of the necessary condition \(\partial h(u)\subseteq\partial g(u)\) \cite{hiriarturruty1989convex}. As noted in \cite{an2017convergence}, proximal \emph{sub}gradient iterations are effective even in handling a nonsmooth nonconvex \(g\) and a nonsmooth concave \(-h\). 
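The DCA iteration can be made concrete on a toy one-dimensional instance of our own choosing, \(g(s)=s^2\) and \(h(s)=|s|\), for which the step \(u^+\in\partial\conj{g}(v)\) has the closed form \(u^+=v/2\):

```python
# DCA sketch on the toy DC problem phi(s) = g(s) - h(s) with
# g(s) = s^2 and h(s) = |s| (illustrative instance, not from the paper).
# One DCA step: v in dh(u) = sign(u), then u+ = argmin_s g(s) - v*s = v/2.
import math

def dca(u0, iters=25):
    u = u0
    for _ in range(iters):
        v = math.copysign(1.0, u)   # a subgradient of h = |.| at u
        u = v / 2.0                 # u+ in dg*(v), i.e. argmin_s s^2 - v*s
    return u

u = dca(0.3)
print(u)  # 0.5 -- stationary: dg(0.5) = {1} = dh(0.5)
```

Starting from any positive (resp. negative) point, the iteration reaches the stationary point \(0.5\) (resp. \(-0.5\)) in one step.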
Alternative approaches use the identity \(-f(x)=\inf_y\set{\conj{f}(y)-\innprod xy}\) involving the convex conjugate \(\conj{f}\) to include an additional convex function \(f\) as \begin{equation}\label{eq:P3} \minimize_{x\in\R^n}g(x)-h(x)-f(x), \end{equation} and then recast the problem as \begin{equation}\label{eq:P3DC} \minimize_{x,y\in\R^n}{ \Phi(x,y) {}\coloneqq{} \overbracket{g(x)+\conj{f}(y)}^{G(x,y)} {}-{} \bigl( \overbracket{\vphantom{\conj{f}}h(x)+\innprod xy}^{\H(x,y)} \bigr) }. \end{equation} By adding and subtracting suitably large quadratics, one can again obtain a decoupled DC formulation, showing that \eqref{eq:P} is in fact as general as \eqref{eq:P3}. When the function \(h\) is smooth (differentiable with Lipschitz gradient), a cornerstone algorithm for the ``convex\(+\)smooth'' formulation \eqref{eq:P3DC} is forward-backward splitting (FBS), amounting to gradient evaluations of the smooth component \( -h(s)-\innprod st \) followed by proximal operations (possibly in parallel) on \(g\) and \(\conj{f}\). A detailed overview of DC algorithms is beyond the scope of this paper; the interested reader is referred to the exhaustive surveys in \cite{tao1997convex,horst1999dc,bacak2011difference} and references therein. 
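The lifting above rests on the identity \(-f(x)=\inf_y\{\conj{f}(y)-\innprod xy\}\); a quick numeric sanity check on a quadratic of our own choosing (\(f(x)=x^2/2\), whose conjugate is \(\conj{f}(y)=y^2/2\)):

```python
# Numeric check (our toy instance) of  -f(x) = inf_y { f*(y) - x*y }
# for f(x) = x^2/2, whose conjugate is f*(y) = y^2/2; the infimum
# is attained at y = x with value -x^2/2.
def f(x):      return 0.5 * x * x
def fconj(y):  return 0.5 * y * y

x = 1.7
ys = [k * 1e-3 - 10.0 for k in range(20_001)]   # grid on [-10, 10]
lhs = -f(x)                                      # -1.445
rhs = min(fconj(y) - x * y for y in ys)          # grid infimum
print(abs(lhs - rhs) < 1e-6)                     # True
```

The grid minimum sits at \(y\approx x\), matching the closed-form value \(-x^2/2\) up to the grid resolution.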
Most related to our approach, \cite{banert2019general} analyzes a Gauss-Seidel-type FBS in the spirit of the PALM algorithm \cite{bolte2014proximal}, and \cite{liu2017further} exploits the interpretation of FBS as a gradient-type algorithm on the \emph{forward-backward envelope} (FBE) \cite{patrinos2013proximal,stella2017forward} to develop quasi-Newton methods for the nonsmooth and nonconvex problem \eqref{eq:P3}. The gradient interpretation of splitting schemes originated in \cite{rockafellar1976monotone} with the proximal point algorithm and has recently been extended to several other schemes \cite{patrinos2013proximal,patrinos2014douglas,stella2018newton,giselsson2018envelope}. In this work we take the converse direction: first we design a smooth surrogate of the nonsmooth DC function in \eqref{eq:P}, and then derive a novel splitting algorithm from its gradient steps. Classical methods stemming from smooth minimization such as L-BFGS can conveniently be implemented, resulting in a method inherently robust against ill conditioning. \begin{algorithm}[t] \caption{Two-prox algorithm for the DC problem \eqref{eq:P}}% \label{alg:P}% Select \(\gamma>0\) and \(0<\lambda<2\), and starting from \(s\in\R^p\), repeat \begin{equation}\label{eq:alg} \begin{cases}[l] \left. 
\begin{array}{c @{{}={}} l} \fillwidthof[c]{s^+}{u} & \prox_{\gamma h}(s) \\ \fillwidthof[c]{s^+}{v} & \prox_{\gamma g}(s) \end{array} ~~\right] ~~ \text{\small(in parallel)} \\[8pt] s^+ = s+\lambda(v-u) \end{cases} \end{equation} {\small {\bf Note:} \(s^+=s-\lambda\gamma\nabla\env(s)\), where \(\env=g^\gamma-h^\gamma\)% }% \end{algorithm} \begin{algorithm}[t] \caption{Three-prox algorithm for the DC problem \eqref{eq:P3}}% \label{alg:P3}% Select \(0<\gamma<1<\delta\), \(0<\lambda<2(1-\gamma)\), and \(0<\mu<2(1-\delta^{-1})\), and starting from \(s,t\in\R^p\), repeat \begin{equation}\label{eq:alg3} \begin{cases}[l@{~~}l] \left. \begin{array}{c @{{}={}} l} \fillwidthof[c]{s^+}{u} & \prox_{\frac{\gamma\delta}{\delta-\gamma}h}\bigl( \frac{\delta s-\gamma t}{\delta-\gamma} \bigr) \\ \fillwidthof[c]{s^+}{v} & \prox_{\gamma g}(s) \\ \fillwidthof[c]{s^+}{z} & \prox_{\delta f}(t) \end{array} ~~\right] & \text{\small(in parallel)} \\[15pt] \left. 
\begin{array}{c @{{}={}} l} s^+ & s+\lambda(v-u) \\ t^+ & \mathrlap{t+\mu(u-z)} \hphantom{\prox_{\frac{\gamma\delta}{\delta-\gamma}h}\bigl( \frac{\delta s-\gamma t}{\delta-\gamma} \bigr)} \end{array} ~~\right] & \text{\small(in parallel)} \end{cases} \end{equation} {\small {\bf Note:} \begin{tabular}[t]{@{}l@{}} \( \binom{s^+}{t^+} {}={} \binom st {}-{} \binom{\gamma\lambda\I~~\phantom{\delta\mu\I}}{\phantom{\gamma\lambda\I}~~\delta\mu\I} \nabla\Psi(s,t) \),~~ where \\ \(\displaystyle \Psi(s,t) {}={} g^\gamma(s) {}-{} f^\delta(t) {}-{} h^{\frac{\gamma\delta}{\delta-\gamma}}\bigl(\tfrac{\delta s-\gamma t}{\delta-\gamma}\bigr) {}+{} \tfrac{1}{2(\delta-\gamma)}\|s-t\|^2 \) \end{tabular} }% \end{algorithm} \subsection{Contributions} \paragraph{Fully parallelizable splitting schemes} In this paper we propose the novel (sub)gradient-free proximal \cref{alg:P} for the DC problem \eqref{eq:P}, and its fully parallelizable variant when applied to \eqref{eq:P3} synopsized in \cref{alg:P3} (see \cref{sec:Preliminaries} for the notation therein adopted). Our approach can be considered complementary to that in \cite{liu2017further}. First, we propose a novel smooth DC envelope function (DCE) that shares minimizers and stationary points with the original nonsmooth DC function \(\varphi\) in \eqref{eq:P}, similarly to the FBE in \cite{liu2017further}. Then, we show that a classical gradient descent on the DCE results in a novel (sub)gradient-free proximal algorithm that is particularly amenable to parallel implementations. In fact, even when specialized to problem \eqref{eq:P3} it involves operations on the three functions that can be done in parallel, differently from FBS-based approaches that prescribe serial (sub)gradient and proximal evaluations. 
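To illustrate \cref{alg:P} concretely, here is a minimal sketch on a toy one-dimensional instance of our own choosing, \(g(s)=s^2\) and \(h(s)=|s|\), whose proximal mappings are available in closed form:

```python
# Two-prox iteration of Algorithm 1 on phi(s) = g(s) - h(s), with
# g(s) = s^2 and h(s) = |s| (our illustrative choice):
#   prox_{gamma g}(s) = s / (1 + 2*gamma),  prox_{gamma h} = soft-threshold.
def soft(s, t):  # prox of t*|.| (soft-thresholding)
    return (abs(s) - t if abs(s) > t else 0.0) * (1.0 if s >= 0 else -1.0)

def two_prox(s, gamma=0.5, lam=1.0, iters=200):
    for _ in range(iters):
        u = soft(s, gamma)            # u = prox_{gamma h}(s)
        v = s / (1.0 + 2.0 * gamma)   # v = prox_{gamma g}(s)
        s = s + lam * (v - u)         # s+ = s - lam*gamma*grad env(s)
    return s, u, v

s, u, v = two_prox(1.2)
print(round(s, 6), round(u, 6), round(v, 6))  # 1.0 0.5 0.5
```

At the fixed point \(u=v\), so \(u=0.5\) is stationary for \(\varphi(s)=s^2-|s|\), consistent with the note below \eqref{eq:alg} that the update is a gradient step on \(g^\gamma-h^\gamma\).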
Due to the complications of computing proximal steps in arbitrary metrics, this flexibility comes at the price of not being able to efficiently handle the composition of \(f\) in \eqref{eq:P3} with arbitrary linear operators, which is instead possible with FBS-based approaches such as \cite{liu2017further,an2017convergence,banert2019general}. \paragraph{Novel smooth DC reformulation} Thanks to the smooth gradient descent interpretation \emph{it is possible to design classical linesearch strategies} to include directions stemming for instance from quasi-Newton methods, \emph{without complicating the first-order algorithmic oracle}. In fact, differently from similar FBE-based quasi-Newton techniques in \cite{liu2017further,patrinos2013proximal,stella2017forward}, no second-order derivatives are needed here and we actually allow for fully nonsmooth formulations. Moreover, since it is the difference of a convex and a Lipschitz-differentiable function, the proposed envelope reformulation allows for the extension of the boosted DCA \cite{artacho2018accelerating} to arbitrary DC problems. \paragraph{A convexity-preserving nonlinear scaling of the FBE} When the function \(h\) in \eqref{eq:P} is smooth, we show that the DCE coincides with the FBE \cite{patrinos2013proximal,stella2017forward,themelis2018forward} after a nonlinear scaling. This change of variable overcomes some limitations of the FBE, such as preserving convexity when problem \eqref{eq:P} is convex and being (Lipschitz) differentiable without additional requirements on function \(h\). \subsection{Paper organization} The paper is organized as follows. \Cref{sec:Preliminaries} lists the adopted notational conventions and some known facts needed in the sequel. \Cref{sec:env} introduces the DCE, a new envelope function for problem \eqref{eq:P}, and provides some of its basic properties and its connections with the FBE. 
\Cref{sec:Algorithm} shows that a classical gradient method on the DCE results in \cref{alg:P}, and establishes convergence results as a simple byproduct. \Cref{alg:P3} is shown to be a \emph{scaled} version of the parent \cref{alg:P}; for the sake of simplicity of presentation, some technicalities needed for this derivation are confined to this section. \Cref{sec:Simulations} shows the effect of L-BFGS acceleration on the proposed method on a sparse principal component analysis problem. \Cref{sec:Conclusions} concludes the paper. \section{Notation and known facts}\label{sec:Preliminaries} The set of symmetric matrices in \(\R^p\) is denoted as \(\symm(\R^p)\); the subset of positive definite matrices is denoted as \(\symm_{++}(\R^p)\). Any \(M\in\symm_{++}(\R^p)\) induces the scalar product \((x,y)\mapsto \trans xMy\) on \(\R^p\), with corresponding norm \(\|x\|_M=\sqrt{\trans xMx}\). When \(M=\I\), the identity matrix of suitable size, we will simply write \(\|x\|\). \(\id\) is the identity function on a suitable space. The subdifferential of a proper, lsc, convex function \(\func{f}{\R^p}{\Rinf\coloneqq\R\cup\set\infty}\) is \[ \partial f(x) {}={} \set{v\in\R^p}[ f(z) {}\geq{} f(x) {}+{} \innprod{v}{z-x},~ \forall z ]. \] The \DEF{effective domain} of \(f\) is \(\dom f=\set{x\in\R^p}[f(x)<\infty]\), while \( \conj{f}(y) \coloneqq{} \sup_{x\in\R^p}\set{\innprod xy - f(x)} \) denotes the \DEF{Fenchel conjugate} of \(f\), which is also proper, closed and convex. Properties of conjugate functions are well described for example in \cite{rockafellar1970convex,hiriarturruty2012fundamentals,bauschke2017convex}. 
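As a toy brute-force illustration of the Fenchel conjugate (our own example): for \(f=|\cdot|\) one has \(\conj{f}=\delta_{[-1,1]}\), the indicator of \([-1,1]\), which a grid supremum reproduces:

```python
# Brute-force Fenchel conjugate f*(y) = sup_x { x*y - f(x) } for f = |.|:
# the supremum is 0 for |y| <= 1 and +infinity otherwise (on a truncated
# grid it simply hits the grid boundary), i.e. f* is the indicator of [-1,1].
def fstar(y):
    xs = [0.5 * k - 50.0 for k in range(201)]    # grid on [-50, 50]
    return max(x * y - abs(x) for x in xs)

print(fstar(0.5))   # 0.0   (y inside [-1, 1])
print(fstar(2.0))   # 50.0  (sup escapes to the grid boundary)
```

The grid step of \(0.5\) is exactly representable in binary, so the printed values are exact.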
Among these we recall that \begin{equation}\label{eq:ConjSubgr} y\in\partial f(x) {}\Leftrightarrow{} \innprod xy {}={} f(x)+\conj{f}(y) {}\Leftrightarrow{} x\in\partial\conj{f}(y). \end{equation} The \DEF{proximal mapping} of \(f\) with stepsize \(\gamma>0\) is \begin{align}\label{eq:prox} \prox_{\gamma f}(x) {}\coloneqq{} & \argmin_{w\in\R^p}{ \set{f(w)+\tfrac{1}{2\gamma}\|w-x\|^2} }, \shortintertext{% while the value function of the above optimization problem defines the \DEF{Moreau envelope} } f^\gamma(x) {}\coloneqq{} & \inf_{w\in\R^p}\set{f(w)+\tfrac{1}{2\gamma}\|w-x\|^2}. \end{align} Properties of the Moreau envelope and the proximal mapping are well documented in the literature \cite{bauschke2017convex, combettes2005signal,combettes2011proximal}, some of which are summarized next. \begin{fact}[Proximal properties of convex functions]\label{thm:proxg}% Let \(f\) be proper, convex, and lsc. Then, for all \(\gamma>0\) and \(s,s'\in\R^p\) \begin{enumerate} \item\label{thm:proxgEquiv} \(\prox_{\gamma f}(s)\) is the unique point \(x\) such that \( s\in x+\gamma\partial f(x) \). \item\label{thm:proxgInnprod} \( \|x-x'\|^2 {}\leq{} \innprod{x-x'}{s-s'} {}\leq{} \|s-s'\|^2 \), where \(x=\prox_{\gamma f}(s)\) and \(x'=\prox_{\gamma f}(s')\). 
\item\label{thm:proxgBounds} for \(x=\prox_{\gamma f}(s)\) and \(w\in\R^p\) it holds that \( f^\gamma(s) {}\leq{} f(w) {}+{} \tfrac{1}{2\gamma}\|w-s\|^2 {}-{} \tfrac{1}{2\gamma}\|x-s\|^2 \). \item\label{thm:MoreauC1} the Moreau envelope \(f^\gamma\) is convex and has \(\frac1\gamma\)-Lipschitz-continuous gradient \( \nabla f^\gamma {}={} \frac1\gamma\bigl(\id-\prox_{\gamma f}\bigr) \). \end{enumerate} \end{fact} \section{The DC envelope}\label{sec:env} In this section we introduce a smooth DC reformulation of \eqref{eq:P} that enables us to cast the nonsmooth and possibly extended-real-valued DC problem into the unconstrained minimization of the DCE, a function with Lipschitz-continuous gradient. A classical gradient descent algorithm on this reformulation will then be shown in \Cref{sec:Algorithm} to lead to the proposed \cref{alg:P,alg:P3}. In this sense, the DCE serves a similar role as the Moreau envelope for the proximal point algorithm \cite{rockafellar1976monotone}, and the FBE and Douglas-Rachford envelope respectively for FBS and the Douglas-Rachford splitting (DRS) \cite{stella2017forward,patrinos2014douglas}. We begin by formalizing the DC setting of problem \eqref{eq:P} dealt with in this paper through the following list of requirements. \begin{ass}\label{ass:P}% The following hold in problem \eqref{eq:P}: \begin{enumeratass} \item \(\func{g,h}{\R^p}{\Rinf}\) are proper, convex, and lsc; \item\label{ass:phi} \(\varphi\) is lower bounded (with the convention \(\infty-\infty=\infty\)). \end{enumeratass} \end{ass} \begin{defin}[DC envelope]\label{defin:env} Suppose that \cref{ass:P} holds. 
Relative to problem \eqref{eq:P}, the DC envelope (DCE) with stepsize \(\gamma>0\) is \[ \env(s) {}\coloneqq{} g^\gamma(s)-h^\gamma(s). \] \end{defin} Before showing that the DCE \(\env\) satisfies the anticipated smoothness properties and is tightly connected with solutions of problem \eqref{eq:P}, we provide a simple characterization of stationary points in terms of the proximal mappings of the functions involved in the DC formulation. This will then be used to connect points that are stationary in the sense of \eqref{eq:stationary} for \eqref{eq:P} with points that are stationary in the classical sense for \(\env\). \begin{lem}[Optimality conditions]\label{thm:optimality}% Suppose that \cref{ass:P} holds. Then, any of the following is equivalent to stationarity at \(u\) in the sense of \eqref{eq:stationary}: \begin{enumerateq} \item\label{thm:prox=} there exist \(\gamma>0\) and \(s\in\R^p\) such that \(u=\prox_{\gamma g}(s)=\prox_{\gamma h}(s)\); \item\label{thm:prox=all} for all \(\gamma>0\) there exists \(s\in\R^p\) such that \(u=\prox_{\gamma g}(s)=\prox_{\gamma h}(s)\). \end{enumerateq} \begin{proof} If \(u\) is stationary, then for every \(\gamma>0\) and \(\xi\in\partial g(u)\cap\partial h(u)\neq\emptyset\) it follows from \cref{thm:proxgEquiv} that \( u=\prox_{\gamma g}(s)=\prox_{\gamma h}(s) \) for \(s=u+\gamma\xi\), proving \ref{thm:prox=all} and thus \ref{thm:prox=}. Conversely, if \ref{thm:prox=} holds then \cref{thm:proxgEquiv} again implies \( \frac{s-u}{\gamma}\in\partial g(u) \) and \( \frac{s-u}{\gamma}\in\partial h(u) \), proving that \(u\) is stationary. 
\end{proof} \end{lem} \begin{lem}[Basic properties of the DCE]\label{thm:env} Let \cref{ass:P} hold, and for notational conciseness given \(s\in\R^p\) let \(u\coloneqq\prox_{\gamma h}(s)\) and \(v\coloneqq\prox_{\gamma g}(s)\). The following hold: \begin{enumerate} \item\label{thm:smooth} \(\env\) is \(\tfrac1\gamma\)-smooth with \( \nabla\env=\tfrac1\gamma\bigl(\prox_{\gamma h}-\prox_{\gamma g}\bigr) \); \item\label{thm:stationary} \(\nabla\env(s)=0\) iff \(u\) is stationary (cf. \eqref{eq:stationary}); \begin{comment} \item\label{thm:innprod} \( -\tfrac1\gamma \|s-s'\|^2 {}\leq{} \innprod{\nabla\env(s)-\nabla\env(s')}{s-s'} {}\leq{} \tfrac1\gamma \|s-s'\|^2 \); \( \tfrac{\sigma_{-h}}{1-\gamma\sigma_{-h}} \|s-s'\|^2 {}\leq{} \innprod{\nabla\env(s)-\nabla\env(s')}{s-s'} {}\leq{} \tfrac{1}{\gamma(1+\gamma\sigma_{h})} \|s-s'\|^2 \); \item\label{thm:sandwich} \( \varphi(v) {}+{} \tfrac{1}{2\gamma}\|v-u\|^2 {}\leq{} \env(s) {}\leq{} \varphi(u) {}-{} \tfrac{1+\gamma\sigma_{h}}{2\gamma}\|v-u\|^2 \); \end{comment} \item\label{thm:sandwich} \( \varphi(v) {}+{} \tfrac{1}{2\gamma}\|v-u\|^2 {}\leq{} \env(s) {}\leq{} \varphi(u) {}-{} \tfrac{1}{2\gamma}\|v-u\|^2 \); \item\label{thm:min} \( \argmin\varphi {}={} \prox_{\gamma h}(S_\star) {}={} \prox_{\gamma g}(S_\star) \) and \(\inf\varphi=\inf\env\) for \(S_\star=\argmin\env\). \end{enumerate} \begin{proof}~ \begin{proofitemize} \item\ref{thm:smooth}~ The expression of the gradient follows from \cref{thm:MoreauC1}. 
The bounds in \cref{thm:proxgInnprod} imply that \begin{equation}\label{eq:smoothbounds} \left|\innprod{\nabla\env(s)-\nabla\env(s')}{s-s'}\right| {}\leq{} \tfrac1\gamma \|s-s'\|^2, \end{equation} proving that \(\nabla\env\) is \(\gamma^{-1}\)-Lipschitz continuous. \item\ref{thm:stationary} Follows from assertion \ref{thm:smooth} and \cref{thm:optimality}. \item\ref{thm:sandwich} Follows by applying the proximal inequalities of \cref{thm:proxgBounds} with \(w=u\) and \(w=v\). \item\ref{thm:min} Follows from assertion \ref{thm:sandwich}, \cref{thm:optimality}, and the fact that global minimizers of \(\varphi\) are stationary. \qedhere \end{proofitemize} \end{proof} \end{lem} \subsection{Connections with the forward-backward envelope} As will be detailed in \Cref{sec:hypo}, considering differences of hypo\-convex functions in problem \eqref{eq:P} leads to virtually no generalization. A more interesting scenario occurs when both \(h\) and \(-h\) are hypoconvex functions, which amounts to \(h\) being \(L_{h}\)-smooth (differentiable with \(L_{h}\)-Lipschitz gradient). In order to elaborate on this property we first need to specialize \cref{thm:proxg} to smooth functions. \begin{lem}[Proximal properties of smooth functions]\label{thm:proxf}% Suppose that \(\func{f}{\R^p}{\R}\) is \(L_{f}\)-smooth. 
Then, there exist \( \sigma_{f},\sigma_{-f} {}\in{} [-L_{f},L_{f}] \) with \( L_{f} {}={} \max\set{ |\sigma_{f}|,|\sigma_{-f}| } \) such that \(f-\tfrac{\sigma_{f}}{2}\|{}\cdot{}\|^2\) and \(-f-\tfrac{\sigma_{-f}}{2}\|{}\cdot{}\|^2\) are convex functions. Then, for all \(\gamma<\nicefrac{1}{[\sigma_{-f}]_-}\) (with the convention \(\nicefrac10=\infty\)) and \(s,s'\in\R^p\) \begin{enumerate} \item\label{thm:proxfEquiv} \(\prox_{-\gamma f}(s)\) is the unique \(u\) such that \( s=u-\gamma\nabla f(u) \);% \item\label{thm:proxfInnprod} \( \tfrac{1}{1-\gamma\sigma_{f}} \|s-s'\|^2 {}\leq{} \innprod{u-u'}{s-s'} {}\leq{} \tfrac{1}{1+\gamma\sigma_{-f}} \|s-s'\|^2 \), where \(u=\prox_{-\gamma f}(s)\) and \(u'=\prox_{-\gamma f}(s')\); \item \((-f)^\gamma\) is differentiable with \(\nabla(-f)^\gamma=\frac{\id-\prox_{-\gamma f}}{\gamma}\). 
\end{enumerate} \begin{proof} The claim on the existence of \(\sigma_{\pm f}\) comes from the fact that \(f\) is \(L_{f}\)-smooth iff \(\tfrac{L_{f}}{2}\|{}\cdot{}\|^2\pm f\) are convex functions, and that \(f\) is \(L_{f}\)-smooth iff so is \(-f\). All other claims then follow from \cref{thm:proxg} applied to the convex function \(\tilde f=-f-\tfrac{\sigma_{-f}}{2}\|{}\cdot{}\|^2\), in light of the identity \( \prox_{\gamma\tilde f} {}={} \prox_{-\frac{\gamma}{1-\gamma\sigma_{-f}}f} {}\circ{} \tfrac{\id}{1-\gamma\sigma_{-f}} \) \cite[Prop. 24.8(i)]{bauschke2017convex}. \end{proof} \end{lem} In the remainder of this subsection, suppose that \(h\) is smooth. Denoting \(f\coloneqq-h\), problem \eqref{eq:P} reduces to \begin{equation}\label{eq:FB:P} \minimize_{u\in\R^n}f(u)+g(u)=g(u)-(-f)(u) \end{equation} with \(g\) convex and \(f\) smooth. A textbook algorithm for addressing such composite minimization problems is FBS, which interleaves proximal and gradient operations as \begin{equation} u^+=\FB u.
\end{equation} By observing that \(s=\Fw u\) iff \(u=\prox_{-\gamma f}(s)\) for \(\gamma<\nicefrac{1}{L_f}\), one obtains the following connection between \(\env\) and the forward-backward envelope \cite[Eq. (2.3)]{stella2017forward} \begin{equation}\label{eq:FBE} \cost_\gamma^{\text{\sc fb}}(u) {}={} f(u) {}-{} \tfrac\gamma2\|\nabla f(u)\|^2 {}+{} g^\gamma(\Fw u). \end{equation} \begin{lem} \renewcommand{\h}{-f}% In problem \eqref{eq:FB:P}, suppose that \(f\) is \(L_{f}\)-smooth and \(g\) is proper, convex, and lsc. Then, for every \(\gamma<\nicefrac{1}{L_{f}}\) \[ \cost_\gamma^{\text{\sc fb}} {}={} \env\circ(\Fw{}) ~\text{and}~ \env {}={} \cost_\gamma^{\text{\sc fb}}\circ\prox_{-\gamma f}. \] Moreover, \(\env\) is \(\frac{1-\gamma L_{f}}{\gamma}\)-smooth, and if \(f\) is additionally convex then so is \(\env\). \begin{proof} Let \(s\in\R^p\) and \(\gamma\in(0,\nicefrac{1}{L_{f}})\) be fixed, and for notational conciseness let \(u=\prox_{-\gamma f}(s)\). Then, \(s=\Fw u\) and \((-f)^\gamma(s)=-f(u)+\tfrac{1}{2\gamma}\|u-s\|^2\), hence \begin{align*} \env(s) {}={} & g^\gamma(\Fw u) {}+{} f(u)-\tfrac{1}{2\gamma}\|u-s\|^2 \\ {}={} & f(u) {}-{} \tfrac\gamma2\|\nabla f(u)\|^2 {}+{} g^\gamma(\Fw u), \end{align*} which is exactly \(\cost_\gamma^{\text{\sc fb}}(u)\), cf.
\eqref{eq:FBE}. By using \cref{thm:proxfInnprod} for \(h=-f\), the bounds in \eqref{eq:smoothbounds} become \[ \tfrac{\sigma_{f}\|s-s'\|^2}{1-\gamma\sigma_{f}} {}\leq{} \innprod{\nabla\env(s)-\nabla\env(s')}{s-s'} {}\leq{} \tfrac{\gamma^{-1}\|s-s'\|^2}{1+\gamma\sigma_{-f}}. \] Since \(|\sigma_{f}|,|\sigma_{-f}|\leq L_{f}\), the claimed smoothness follows. Finally, if \(f\) is convex then \(\sigma_{f}\) is nonnegative, and thus so is the lower bound above, proving convexity of \(\env\). \end{proof} \end{lem} \section{The algorithm}\label{sec:Algorithm} Having assessed the \(\frac1\gamma\)-smoothness of \(\env\) and its connection with problem \eqref{eq:P} in \cref{thm:env}, the minimization of the nonsmooth DC function \(\varphi=g-h\) can be carried out with gradient descent with constant stepsize \(\tau<2\gamma\) on \(\env\). As shown in the next result, this is precisely \cref{alg:P}. \begin{thm}\label{thm:alg}% Suppose that \cref{ass:P} holds, and starting from \(s^0\in\R^n\) consider the iterates \(\seq{s^k,u^k,v^k}\) generated by \cref{alg:P} with \(\gamma>0\) and \(\lambda\in(0,2)\). Then, for every \(k\in\N\) it holds that \( s^{k+1} {}={} s^k {}-{} \gamma\lambda\nabla\env(s^k) \) and \begin{equation}\label{eq:SD} \env(s^{k+1}) {}\leq{} \env(s^k) {}-{} \tfrac{\lambda(2-\lambda)}{2\gamma} \|u^k-v^k\|^2.
\end{equation} In particular: \begin{enumerate} \item\label{thm:alg:res} the fixed-point residual vanishes with \(\min_{i\leq k}\|u^i-v^i\|=o(\nicefrac{1}{\sqrt k})\); \item\label{thm:alg:omega}% \(\seq{u^k}\) and \(\seq{v^k}\) have the same set of cluster points, call it \(\Omega\); when \(\seq{s^k}\) is bounded, every \(u_\star\in\Omega\) is stationary for \(\varphi\) (in the sense of \eqref{eq:stationary}) and \(\varphi\) is constant on \(\Omega\), the value being the (finite) limit of the sequences \(\seq{\env(s^k)}\) and \(\seq{\varphi(v^k)}\);% \item\label{thm:alg:bounded} if \(\varphi\) is coercive, then \(\seq{s^k,u^k,v^k}\) is bounded. \end{enumerate} \begin{proof} That \(s^{k+1}=s^k-\lambda\gamma\nabla\env(s^k)\) follows from \cref{thm:smooth}. The proof is now standard, see \eg \cite{bertsekas2016nonlinear}: \(\frac1\gamma\)-smoothness implies the upper bound \begin{align*} \env(s^{k+1}) {}\leq{} & \env(s^k) {}+{} \innprod{\nabla\env(s^k)}{s^{k+1}-s^k} \\ & {}+{} \tfrac{1}{2\gamma}\|s^{k+1}-s^k\|^2 \\ {}={} & \env(s^k) {}-{} \tfrac{\lambda(2-\lambda)}{2\gamma} \|u^k-v^k\|^2, \end{align*} which is \eqref{eq:SD}. We now show the numbered claims. \begin{proofitemize} \item\ref{thm:alg:res} By telescoping \eqref{eq:SD} and using the fact that \(\inf\env=\inf\varphi>-\infty\) owing to \cref{thm:min,ass:phi}, we obtain that the sequence of squared residuals \(\seq{\|u^k-v^k\|^2}\) has finite sum, hence the claim. \item\ref{thm:alg:omega} That the sequences have the same cluster points follows from assertion \ref{thm:alg:res}. Moreover, \eqref{eq:SD} and the lower boundedness of \(\env\) imply that the sequence \(\seq{\env(s^k)}\) monotonically decreases to a finite value, call it \(\varphi_\star\). Continuity of \(\env\) then implies that \(\env(s_\star)=\varphi_\star\) for every limit point \(s_\star\) of \(\seq{s^k}\). If \(\seq{s^k}\) is bounded, then so are \(\seq{u^k}\) and \(\seq{v^k}\) owing to Lipschitz continuity of the proximal mappings.
Moreover, for every \(k\) one has \(s^k=u^k+\gamma\xi^k=v^k+\gamma\eta^k\) for some \(\xi^k\in\partial h(u^k)\) and \(\eta^k\in\partial g(v^k)\). Necessarily, the sequences of subgradients are bounded, and for any limit point \(u_\star\) of \(\seq{u^k}\), up to extracting a subsequence, we have that \(u_\star=\prox_{\gamma h}(s_\star)=\prox_{\gamma g}(s_\star)\) for some cluster point \(s_\star\) of \(\seq{s^k}\). By invoking \cref{thm:optimality} we conclude that \(\varphi(u_\star)=\varphi_\star\). \item\ref{thm:alg:bounded} Boundedness of \(\seq{s^k}\) follows from the fact that \(\env(s^k)\leq\env(s^0)\) for all \(k\), owing to \eqref{eq:SD}, together with coercivity of \(\env\), inherited from \(\varphi\) through \cref{thm:sandwich}. In turn, boundedness of \(\seq{u^k}\) and \(\seq{v^k}\) follows from Lipschitz continuity of the proximal mappings. \qedhere \end{proofitemize} \end{proof} \end{thm} The remainder of the section is devoted to deriving \cref{alg:P3} as a special instance of \cref{alg:P} applied to the problem reformulation \eqref{eq:P3DC}. In order to formalize this derivation, we first need to address a minor technicality arising because of the nonconvexity of the function \(\H\) therein, which prevents a direct application of \cref{alg:P} to the function decomposition \(G-\H\). Fortunately, however, by simply adding a quadratic term to both \(G\) and \(\H\) the desired DC formulation is obtained without actually changing the cost function \(\Phi\) in problem \eqref{eq:P3DC}. This simple issue is addressed next. \subsection{Strongly and hypoconvex functions}\label{sec:hypo} Clearly, adding the same quantity to both functions \(g\) and \(h\) leaves problem \eqref{eq:P} unchanged.
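Before turning to the generalizations developed in this and the following subsections, it may help to record the basic iteration of \cref{alg:P} in code form. The sketch below (in Python; the function name and the proximal mappings, passed as callables with the stepsize \(\gamma\) already absorbed, are hypothetical placeholders) mirrors the updates \(u^k=\prox_{\gamma h}(s^k)\), \(v^k=\prox_{\gamma g}(s^k)\), \(s^{k+1}=s^k+\lambda(v^k-u^k)\), using the fixed-point residual \(\|u^k-v^k\|\) as stopping criterion.

```python
import numpy as np

def dce_gradient_descent(prox_g, prox_h, s0, lam=1.0, tol=1e-6, max_iter=1000):
    """Gradient descent on the DC envelope (sketch of the basic iteration):
    u^k = prox_{gamma h}(s^k), v^k = prox_{gamma g}(s^k),
    s^{k+1} = s^k + lam*(v^k - u^k), with relaxation 0 < lam < 2.
    The stepsize gamma is assumed baked into the prox callables."""
    s = np.asarray(s0, dtype=float).copy()
    u = v = s
    for _ in range(max_iter):
        u = prox_h(s)                     # u^k = prox_{gamma h}(s^k)
        v = prox_g(s)                     # v^k = prox_{gamma g}(s^k)
        if np.linalg.norm(u - v) <= tol:  # fixed-point residual ||u^k - v^k||
            break
        s = s + lam * (v - u)             # relaxed envelope-gradient step
    return s, u, v
```

For instance, taking \(h\equiv0\) (so that \(\prox_{\gamma h}=\id\)) and \(g=\tfrac12\|{}\cdot{}-a\|^2\) drives the iterates to the minimizer \(a\).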
In particular, the convexity setting of \cref{ass:P} can also be achieved when \(g\) and \(h\) are \emph{hypoconvex}, in the sense that they are convex up to adding a suitably large quadratic function. Recall that for \(\tilde f=f+\tfrac\mu2\|{}\cdot{}\|^2\) it holds that \( \prox_{\tilde\gamma\tilde f}(s) {}={} \prox_{\gamma f}(\tfrac{s}{1+\tilde\gamma\mu}) \) for \(\gamma=\frac{\tilde\gamma}{1+\tilde\gamma\mu}\) \cite[Prop. 24.8(i)]{bauschke2017convex}. Therefore, as long as there exists \(\mu\in\R\) such that both \(g+\tfrac\mu2\|{}\cdot{}\|^2\) and \(h+\tfrac\mu2\|{}\cdot{}\|^2\) are convex functions, one can apply iterations \eqref{eq:alg} to the minimization of \( g+\tfrac\mu2\|{}\cdot{}\|^2 {}-{} \left( h+\tfrac\mu2\|{}\cdot{}\|^2 \right) \) to obtain \[ \begin{cases}[l @{{}={}} l] u^k & \prox_{\tilde\gamma h}(\tilde s^k) \\ v^k & \prox_{\tilde\gamma g}(\tilde s^k) \\[2pt] \tilde s^{k+1} & \tilde s^k+\tilde\lambda(v^k-u^k), \end{cases} \] where \(\tilde\gamma\coloneqq\frac{\gamma}{1+\gamma\mu}\), \(\tilde s^k\coloneqq\frac{1}{1+\gamma\mu}s^k\), and \(\tilde\lambda\coloneqq\frac{1}{1+\gamma\mu}\lambda\). By observing that \( \frac{\gamma}{1+\gamma\mu} \) ranges in \((0,\nicefrac1\mu)\) for \(\gamma\in(0,\infty)\) (with the convention \(\nicefrac10=\infty\)), and that \( \tilde\lambda {}={} \lambda(1-\tilde\gamma\mu) \), we obtain the following.
\begin{rem}[\emph{Strongly} convex and \emph{hypo}convex functions]\label{thm:hypo}% If \(\mu\in\R\) is such that both \(g+\tfrac\mu2\|{}\cdot{}\|^2\) and \(h+\tfrac\mu2\|{}\cdot{}\|^2\) are convex functions, then all the numbered claims of \cref{thm:alg} still hold provided that \(0<\lambda<2(1-\gamma\mu)\). \end{rem} As a final step towards the analysis of \cref{alg:P3}, in the next subsection we motivate the presence of the two additional parameters \(\delta\) and \(\mu\) that do not appear in \cref{alg:P}. \subsection{Matrix stepsize and relaxation} A substantial degree of flexibility can be introduced by replacing the quadratic term \(\tfrac{1}{2\gamma}\|w-{}\cdot{}\|^2\) appearing in the definition \eqref{eq:prox} of the proximal mapping with the squared norm \(\tfrac12\|w-{}\cdot{}\|_{\Gamma^{-1}}^2\) induced by a matrix \(\Gamma\in\symm_{++}(\R^p)\). The scalar stepsize \(\gamma\) is recovered by taking \(\Gamma=\gamma\I\); in general, we may thus think of \(\Gamma\) as a matrix stepsize. Denoting \begin{align}\label{eq:PROX} \prox_{f}^\Gamma(x) {}={} & \argmin_w\set{f(w)+\tfrac12\|w-x\|_{\Gamma^{-1}}^2} \shortintertext{and} f^\Gamma(x) {}={} & \min_w\set{f(w)+\tfrac12\|w-x\|_{\Gamma^{-1}}^2} \end{align} the corresponding Moreau envelope, it follows from \cite[Thm. 4.1.4]{hiriarturruty1993convex} that \(\nabla f^\Gamma=\Gamma^{-1}(\id-\prox_{f}^\Gamma)\) satisfies \[ 0 {}\leq{} \innprod{\nabla f^\Gamma(s)-\nabla f^\Gamma(s')}{s-s'} {}\leq{} \|s-s'\|_{\Gamma^{-1}}^2.
\] \begin{rem}[Matrix stepsizes and relaxations]\label{thm:matrix}% \renewcommand{\gamma}{\Gamma}% Under \cref{ass:P}, given a diagonal stepsize \(\Gamma\in\symm_{++}(\R^p)\) and a diagonal relaxation \(\Lambda\in\symm_{++}(\R^p)\), the iterations \begin{equation}\label{eq:ALG} \begin{cases}[l @{{}={}} l] u^k & \prox_{h}^\Gamma(s^k) \\[2pt] v^k & \prox_{g}^\Gamma(s^k) \\[2pt] s^{k+1} & s^k+\Lambda(v^k-u^k) \end{cases} \end{equation} produce a sequence such that \[ \env(s^{k+1}) {}\leq{} \env(s^k) {}-{} \tfrac12 \|u^k-v^k\|_{(2\I-\Lambda)\Gamma^{-1}\Lambda}^2. \] In particular, all the numbered claims of \cref{thm:alg} still hold when \( 0\prec\Lambda\prec 2\I \).\footnote{% Although similar claims can be made for more general positive definite matrices, the diagonality requirement guarantees the symmetry of \((2\I-\Lambda)\Gamma^{-1}\Lambda\) and thus its positive definiteness for \(\Lambda\) as prescribed above.% }% \end{rem} Notice that the optimality condition for the minimization problem \eqref{eq:PROX} reads \( 0 {}\in{} \partial f(w) {}+{} \Gamma^{-1}(w-x) \). Equivalently, \begin{equation}\label{eq:PROXequiv} w=\prox_{f}^\Gamma(x) \quad\Leftrightarrow\quad x\in w+\Gamma\partial f(w).
\end{equation} By using this fact, if a symmetric matrix \(M\) is such that the function \(\tilde f=f+\tfrac12\innprod{{}\cdot{}}{M{}\cdot{}}\) is convex, one can express its proximal map in terms of that of \(f\), in a similar fashion to the scalar case considered in \cref{sec:hypo}, namely \[ \prox_{\tilde f}^{\tilde\Gamma} {}={} \prox_{f}^{\Gamma}\circ(\I-\Gamma M) \] with \(\Gamma=(\tilde\Gamma^{-1}+M)^{-1}\).\footnote{% These expressions in terms of the new stepsize \(\Gamma\) use the matrix identities \( (\I+\tilde\Gamma M)^{-1}\tilde\Gamma {}={} (\tilde\Gamma^{-1}+M)^{-1} \) and \( (\I+\tilde\Gamma M)^{-1} {}={} \I-\Gamma M \) for \( \Gamma=(\I+\tilde\Gamma M)^{-1}\tilde\Gamma \). } It is thus possible to combine \cref{thm:hypo,thm:matrix} as follows, where again for simplicity we restrict to diagonal matrices. \begin{rem}\label{thm:hypomatrix}% \renewcommand{\gamma}{\Gamma}% If a diagonal matrix \(M\) is such that both functions \(g+\tfrac12\innprod{{}\cdot{}}{M{}\cdot{}}\) and \(h+\tfrac12\innprod{{}\cdot{}}{M{}\cdot{}}\) are convex, then the sequence produced by \eqref{eq:ALG} satisfies all the numbered claims of \cref{thm:alg} as long as \( 0 {}\prec{} \Lambda {}\prec{} 2(\I-\Gamma M) \). \end{rem} \subsection{A parallel three-prox splitting}\label{sec:3splitting} After the generalization documented in \cref{thm:hypomatrix}, we are ready to address the formulation \eqref{eq:P3} and express \cref{alg:P3} as a ``scaled'' variant of \cref{alg:P}. We begin by rigorously framing the problem setting. \begin{ass}\label{ass:P3} In problem \eqref{eq:P3} \begin{enumeratass} \item \(\func{f,g,h}{\R^n}{\Rinf}\) are proper, lsc, and convex; \item \(\varphi\) is lower bounded.
\end{enumeratass} \end{ass} \begin{thm}\label{thm:alg3}% \renewcommand{\g}{G}% \renewcommand{\h}{\H}% \renewcommand{\gamma}{\Gamma}% Let \cref{ass:P3} hold, and starting from \((s^0,t^0)\in\R^n\times\R^n\) consider the iterates \(\seq{s^k,t^k,u^k,v^k,z^k}\) generated by \cref{alg:P3} with \(0<\gamma<1<\delta\), \(0<\lambda<2(1-\gamma)\), and \(0<\mu<2(1-\delta^{-1})\). Then, denoting \begin{align*} \Psi(s,t) {}={} & \env(s,\nicefrac{t}{\delta}) \\ \numberthis\label{eq:Psi3} {}={} & g^\gamma(s) {}-{} f^\delta(t) {}-{} h^{\frac{\gamma\delta}{\delta-\gamma}}\bigl(\tfrac{\delta s-\gamma t}{\delta-\gamma}\bigr) {}+{} \tfrac{1}{2(\delta-\gamma)}\|s-t\|^2, \end{align*} for every \(k\in\N\) it holds that \begin{equation}\label{eq:P3:GD} \textstyle \binom{s^{k+1}}{t^{k+1}} {}={} \binom{s^k}{t^k} {}-{} \binom{\gamma\lambda\I~~\phantom{\delta\mu\I}}{\phantom{\gamma\lambda\I}~~\delta\mu\I} \nabla\Psi(s^k,t^k).
\end{equation} Moreover: \begin{enumerate} \item\label{thm:alg3:res} the fixed-point residual vanishes with \(\min_{i\leq k}\|\binom{u^i-v^i}{u^i-z^i}\|=o(\nicefrac{1}{\sqrt k})\); \item\label{thm:alg3:omega}% \(\seq{u^k}\), \(\seq{v^k}\), and \(\seq{z^k}\) have the same set of cluster points, call it \(\Omega\); when \(\seq{s^k}\) is bounded, every \(u_\star\in\Omega\) satisfies the stationarity condition \[ \emptyset {}\neq{} \partial g(u_\star) {}\cap{} \bigl( \partial f(u_\star) {}+{} \partial h(u_\star) \bigr) {}\subseteq{} \partial g(u_\star) {}\cap{} \partial(f+h)(u_\star) \] and \(\varphi\) is constant on \(\Omega\), the value being the (finite) limit of the sequence \(\seq{\varphi(u^k)}\);% \item\label{thm:alg3:bounded} if \(\varphi\) is coercive, then \(\seq{s^k,t^k,u^k,v^k,z^k}\) is bounded. \end{enumerate} \begin{proof} Let \(\Phi\), \(G\), and \(\H\) be as in \eqref{eq:P3DC}, and observe that \[ \Phi(x,y) {}\geq{} \inf_{y'}\Phi(x,y') {}={} \varphi(x). \] In particular, if \(\varphi\) is coercive then necessarily so is \(\Phi\). Let \( \Gamma {}\coloneqq{} \binom{\gamma\I~~\phantom{\delta^{-1}\I}}{\phantom{\gamma\I}~~\delta^{-1}\I} \). Under \cref{ass:P3}, the function \(G\) is convex and one can easily verify that \begin{align*} (v_s,v_t) {}={} \prox_{G}^\Gamma(s,t) {}\Leftrightarrow{} & \begin{cases} v_s {}={} \prox_{\gamma g}(s) \\ v_t {}={} t-\delta^{-1}\prox_{\delta f}(\delta t) \end{cases} \shortintertext{% in light of the Moreau identity \( \prox_{\nicefrac{\conj{f}}{\delta}}(t) {}={} t-\delta^{-1}\prox_{\delta f}(\delta t) \), see \cite[Thm. 14.3(ii)]{bauschke2017convex}.
Furthermore, from \eqref{eq:PROXequiv} we have } (u_s,u_t) {}={} \prox_{\H}^\Gamma(s,t) {}\Leftrightarrow{}& \begin{cases} s {}\in{} u_s+\gamma\partial h(u_s)+\gamma u_t \\ t {}={} u_t+\nicefrac{u_s}{\delta} \end{cases} \\ {}\Leftrightarrow{}& \begin{cases} \frac{s-\gamma t}{1-\nicefrac\gamma\delta} {}\in{} u_s+\frac{\gamma}{1-\nicefrac\gamma\delta}\partial h(u_s) \\ u_t {}={} t-\nicefrac{u_s}{\delta} \end{cases} \\ {}\Leftrightarrow{}& \begin{cases} u_s {}={} \prox_{\frac{\gamma\delta}{\delta-\gamma}h}\bigl( \frac{\delta s-\gamma\delta t}{\delta-\gamma} \bigr) \\ u_t {}={} t-\nicefrac{u_s}{\delta}. \end{cases} \end{align*} In particular, \[\textstyle \binom{s}{\delta t} {}+{} \binom{\lambda\I~~\phantom{\delta\mu\I}}{\phantom{\lambda\I}~~\delta\mu\I} \left( \prox_{G}^\Gamma\binom st {}-{} \prox_{\H}^\Gamma\binom st \right) {}={} \binom{ s+\lambda(v_s-u_s) }{ \delta t+\mu(u_s-\prox_{\delta f}(\delta t)) }. \] As is apparent, iterations \eqref{eq:alg3} correspond to those in \eqref{eq:ALG} with \( \Lambda {}\coloneqq{} \binom{\lambda\I~~\phantom{\mu\I}}{\phantom{\lambda\I}~~\mu\I} \) after the scaling \(t\gets\nicefrac t\delta\). From these computations, and using the fact that \( (\conj{f})^{\nicefrac1\delta}\circ\nicefrac{\id}{\delta} {}={} \tfrac{1}{2\delta}\|{}\cdot{}\|^2 {}-{} f^\delta \), see \cite[Thm. 14.3(i)]{bauschke2017convex}, the expressions in \eqref{eq:Psi3} and \eqref{eq:P3:GD} are obtained.
Since the function \(\H+\tfrac12\|{}\cdot{}\|^2\) is convex (that is, the setting of \cref{thm:hypomatrix} is satisfied with \(M=\I\)) and the condition \( 0 {}\prec{} \Lambda {}\prec{} 2(\I-\Gamma) \) holds when \(\gamma,\delta,\lambda,\mu\) are as in the statement, it only remains to show that the limit points satisfy the stationarity condition of assertion \ref{thm:alg3:omega}, as the rest of the proof follows from \cref{thm:alg:res,thm:hypomatrix}. To this end, since \( \binom{v^k-u^k}{u^k-z^k} {}={} \binom{s^{k+1}-s^k}{t^{k+1}-t^k} {}\to{} 0 \), the sequences \(\seq{u^k}\), \(\seq{v^k}\), and \(\seq{z^k}\) have the same cluster points. If \(\seq{s^k}\) is bounded, arguing as in the proof of \cref{thm:alg:omega} we have that if \(u^k\to u_\star\) as \(K\ni k\to\infty\) for an infinite set of indices \(K\subseteq\N\), then necessarily also \(v^k\to u_\star\) as \(K\ni k\to\infty\), and \((s^k,t^k)\to (s_\star,t_\star)\) as \(K\ni k\to\infty\) for some \(s_\star,t_\star\) such that \[ \prox_{\frac{\gamma\delta}{\delta-\gamma}h} \bigl( \tfrac{\delta s_\star-\gamma t_\star}{\delta-\gamma} \bigr) {}={} \prox_{\gamma g}(s_\star) {}={} \prox_{\delta f}(t_\star).
\] We then conclude from \cref{thm:proxgEquiv} that \[ \frac{ \frac{\delta s_\star-\gamma t_\star}{\delta-\gamma} {}-{} u_\star }{ \frac{\gamma\delta}{\delta-\gamma} } {}\in{} \partial h(u_\star), ~ \frac{s_\star-u_\star}{\gamma} {}\in{} \partial g(u_\star), ~ \frac{t_\star-u_\star}{\delta} {}\in{} \partial f(u_\star), \] which gives \[ \tfrac{s_\star-u_\star}{\gamma} {}\in{} \partial g(u_\star) {}\cap{} \left( \partial f(u_\star) {}+{} \partial h(u_\star) \right), \] and the claimed stationarity condition follows from the inclusion \( \partial f+\partial h {}\subseteq{} \partial(f+h) \), see \cite[Thm. 23.8]{rockafellar1970convex}. \end{proof} \end{thm} \section{Simulations}\label{sec:Simulations} We study the performance of \cref{alg:P} applied to a sparse principal component analysis (SPCA) problem. Following \cite[\S2.1]{journee2010generalized}, an SPCA problem can be formulated as \begin{equation}\label{eq:PCA} \minimize -\tfrac12\trans s\Sigma s + \kappa\|s\|_1 \quad\stt{} s\in\cball01 \end{equation} with \(\cball01\coloneqq\set{s}[\|s\|\leq 1]\), \(\Sigma = \trans AA\) the sample covariance matrix, and \(\kappa\) a sparsity-inducing parameter. This problem can be identified as a DC problem of the form \eqref{eq:P} by setting \(g(s) = \kappa\|s\|_1 + \indicator_{\cball01}(s)\) and \(h(s) = \tfrac{1}{2}\trans s\Sigma s\), where \(\indicator_C\) denotes the indicator function of a (nonempty, closed, convex) set \(C\), namely \(\indicator_C(x)=0\) if \(x\in C\) and \(\infty\) otherwise.
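This DC identification can be instantiated numerically; the sketch below (in Python, with hypothetical small random data and a merely illustrative choice of \(\kappa\)) builds a random instance, implements the two proximal mappings, whose closed forms are derived next in the text, and runs the plain iteration \eqref{eq:alg}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# sparse-ish random data matrix A in R^{20n x n} with ~10% nonzeros
A = rng.standard_normal((20 * n, n)) * (rng.random((20 * n, n)) < 0.1)
Sigma = A.T @ A                                  # sample covariance
kappa = 0.1 * np.max(np.abs(Sigma))              # illustrative sparsity weight
gamma = 0.9 / np.linalg.eigvalsh(Sigma).max()    # gamma = 0.9 / lambda_max(Sigma)

I_gS = np.eye(n) + gamma * Sigma

def prox_h(s):
    # prox of h(s) = 0.5 s' Sigma s: solve (I + gamma*Sigma) w = s
    return np.linalg.solve(I_gS, s)

def prox_g(s):
    # prox of g = kappa*||.||_1 + indicator of the unit ball:
    # soft-threshold, then project onto the unit Euclidean ball
    w = np.sign(s) * np.maximum(np.abs(s) - kappa * gamma, 0.0)
    return w / max(1.0, np.linalg.norm(w))

# plain (non-accelerated) iteration: s^{k+1} = s^k + lam*(v^k - u^k)
s = rng.standard_normal(n)
for _ in range(1000):
    u, v = prox_h(s), prox_g(s)
    if np.linalg.norm(u - v) <= 1e-6:
        break
    s = s + 1.0 * (v - u)
```

Consistently with the observations reported in this section, the plain iteration may well exhaust the 1000 iterations allotted here; the L-BFGS acceleration used in the actual experiments is omitted from this sketch.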
Then, \begin{align*} \prox_{\gamma h}(s) {}={} & (\I +\gamma\Sigma)^{-1}s, ~~\text{and} \\ \prox_{\gamma g}(s) {}={} & \frac{\sign(s)\odot[|s|-\kappa\gamma{\bf 1}]_+}{\max\set{1,\|[|s|-\kappa\gamma{\bf 1}]_+\|}}, \end{align*} with \(\odot\) the elementwise multiplication, \(|{}\cdot{}|\) the elementwise absolute value, and \({\bf 1}\) the \(\R^n\)-vector of all ones. To \eqref{eq:PCA} we applied FBS, DRS, DCA, and \cref{alg:P} (gradient descent on the DCE) with L-BFGS directions and Wolfe backtracking. Sparse random matrices \(A\in\R^{20n\times n}\) with 10\% nonzero entries were generated for 11 values of \(n\) on a linear scale between 100 and 1000, with a sufficiently small \(\kappa\) \cite[\S2.1]{journee2010generalized}. The mean number of iterations required by the solvers over these instances is reported in the first column of \cref{fig:iterations}. A stepsize \(\gamma = 0.9\lambda_\textrm{max}^{-1}(\Sigma)\) was selected for \cref{alg:P} and FBS, and \(\gamma = 0.45\lambda_\textrm{max}^{-1}(\Sigma)\) for DRS, consistently with the nonconvex analysis in \cite{themelis2020douglas}. Stepsize tuning might lead to better performance of these algorithms but was not considered here. The termination criterion \(\|\prox_{\gamma h}(s) - \prox_{\gamma g}(s)\| \leq 10^{-6}\) was used for all solvers. Plain \cref{alg:P} (without L-BFGS) always exceeded 1000 iterations. \begin{figure} \centering \includetikz[width=8.0cm]{bar_paper} \caption{% Iteration comparison for random instances of \eqref{eq:PCA}.% }% \label{fig:iterations} \end{figure} \Cref{fig:iterations} also lists the complexity in terms of function calls. Evaluating \(h\) and \(\nabla h\) requires a matrix-vector product, which amounts to \(O(n^2)\) operations.
By factorizing \(\I +\gamma\Sigma\) once offline, each backsolve to compute \(\prox_{\gamma h}\) also requires \(O(n^2)\) operations. Finally, \(\prox_{\gamma g}\) requires \(2n\) comparisons and a norm computation, and is clearly the least expensive operation. DCA and FBS need one \(\nabla h\) and one \(\prox_{\gamma g}\) (or similar) operation per iteration, and DRS one \(\prox_{-\gamma h}\) (work equivalent to \(\prox_{\gamma h}\)) and one \(\prox_{\gamma g}\) operation per iteration. \Cref{alg:P} requires one \(\prox_{\gamma h}\) and one \(\prox_{\gamma g}\) operation per iteration, and L-BFGS additionally needs one call to \(h\), \(\prox_{\gamma h}\), and \(\prox_{\gamma g}\) per trial stepsize in the linesearch. However, as \(h\) and \(\prox_{\gamma h}\) involve linear operations for this particular problem, only one evaluation is required during the whole linesearch. Furthermore, in practice it was observed that a stepsize of 1 was almost always accepted. It therefore follows from \cref{fig:iterations} that \cref{alg:P} with L-BFGS requires less work to converge than the other methods, disregarding the one-time factorization cost not present in FBS and DCA. \section{Conclusions}\label{sec:Conclusions} By reshaping nonsmooth DC problems into the minimization of the smooth DC envelope function (DCE), a gradient method yields a new algorithm for DC programming. The algorithm is of splitting type, involving subgradient-free proximal operations on each component which, additionally, can be carried out in parallel at each iteration.
The smooth reinterpretation naturally leads to the possibility of Newton-type acceleration techniques, which can significantly improve the convergence speed. The DCE also has theoretical appeal in its deep kinship with the forward-backward envelope, of which it is shown to be a reparametrization with more favorable regularity properties. We believe that this connection may be a valuable tool for relaxing assumptions in FBE-based algorithms, which is planned for future work. \bibliographystyle{plain}
\section{Introduction} Accreting X-ray Pulsars (XRPs) were discovered almost $50\,$yr ago, when X-ray pulsations were detected from Cen~X-3 and Her~X-1 \citep{Giacconi1971, Tananbaum1972} and subsequently interpreted as a rotating, magnetized Neutron Star (NS) accreting the stellar wind expelled by a donor companion star \citep{PringleRees1972,DavidsonOstriker1973,LPP1973}. For magnetized NSs, where the magnetic field strength is $B\sim10^{12}\,$G, the stellar wind flow is disrupted by the magnetic pressure and channeled to the magnetic polar caps, the so-called ``hotspots''. Here, the potential energy is converted into X-ray radiation with a luminosity $L_{\rm acc}$ of: \begin{equation} L_{\rm acc} \approx \frac{G\,M_{\rm NS}\,\dot{m}}{R_{\rm NS}} \end{equation} where $M_{\rm NS}=1.4\,M_{\sun}$ and $R_{\rm NS}=10\,$km are the mass and radius, respectively, of a typical NS, and $\dot{m}$ is the mass accretion rate. The study of an XRP system, consisting of a magnetized, accreting compact object and an optical companion, is key to understanding the behavior of matter under extreme conditions and to probing the evolutionary paths of both the binary system and its individual components. The NS represents the final stage of the evolutionary track of massive stars as a supernova remnant, characterized by extreme densities, high magnetic and gravitational fields, and a very small moment of inertia. Therefore, knowledge from multiple scientific disciplines is required in order to describe NSs in an astrophysical context (i.e., accretion processes, plasma physics, nuclear physics, electrodynamics, general relativity, and quantum theory). Furthermore, the presence of a donor companion makes these systems excellent laboratories for the study of additional astrophysical processes, such as the stellar wind environment, radiative effects, and matter transfer.
Finally, the presence of an orbiting XRP makes these objects invaluable tools for the characterization of orbital elements and component masses. The Milky Way and the Magellanic Clouds contain $\sim230$ XRPs\footnote{\url{http://www.iasfbo.inaf.it/~mauro/pulsar_list.html}}. Recent reviews on XRPs and their observational properties are given in \citet{Caballero+Wilms12}, \citet{Walter+Ferrigno16}, \citet{Maitra17}, and \citet{Paul+17}. These systems have pulse periods that range from several ms to hours. Approximately half of those systems have only been observed serendipitously, during transient episodes of X-ray activity. In this paper, we review more than 10 yr of observations of XRPs with the Gamma-Ray Burst Monitor (GBM), an all-sky, transient monitor aboard the Fermi observatory. The wide field of view and high timing capability of GBM are particularly suited to the continuous study of both transient and persistent XRPs. As of 2019 November, GBM has detected a total of $39$ XRPs. This paper is organized as follows. In Sect.~\ref{sec:physics} we briefly review the physics of accretion onto magnetized compact objects. In Sect.~\ref{sec:gbm} we describe the GBM instrument and its data handling. In Sect.~\ref{sec:timing} we describe the timing analysis applied to the GBM raw data in order to obtain its data products. In Sect.~\ref{sec:overview} we give an overview of the binary systems observed by GBM and the type of X-ray activity that characterizes those systems. In Sect.~\ref{sec:individuals} we describe each XRP system, providing a summary of their spin history as seen previously by other observatories and currently by GBM. Finally, in Sect.~\ref{sec:discussion} we discuss the main results from the population study and from single systems in the most interesting cases. We summarize the importance of the GBM Pulsar Project in Sect.~\ref{sec:summary}.
As the estimation of the spectral luminosity and the measurement of the distance to the sources are important aspects of this work, we describe them separately in Appendices~\ref{sec:bolo} and \ref{sec:gaiadist}, respectively. \section{Accretion Physics onto Magnetized Neutron Stars}\label{sec:physics} When accretion occurs onto a magnetized NS, the accreted material does not flow smoothly onto the surface of the compact object but is mediated by the NS's magnetic field \citep{PringleRees1972}. At a certain distance from the NS surface, namely at the Alfv\'{e}n radius $r_{\rm A}$, the energy density of the magnetic field balances the kinetic energy density of the infalling material: \begin{equation} r_A= \left(\frac{\mu^4}{2GM\dot{M}^2}\right)^{1/7}=6.8\times10^{8}\,\dot{M}_{10}^{-2/7}\,\mu^{4/7}_{30}\,M_{1.4}^{-1/7}\,{\rm cm} \end{equation} where $\dot{M}_{10}$ is the mass accretion rate in units of $10^{-10}\,M_\odot\,$yr$^{-1}$, $\mu_{30}$ is the magnetic moment in units of $10^{30}\,$G\,cm$^{3}$ (corresponding to a typical magnetic field strength of $10^{12}\,$G at the NS surface), and $M_{1.4}$ is the mass of the NS in units of $1.4\,M_\odot$. The material can then penetrate the NS magnetic field via Rayleigh-Taylor and Kelvin-Helmholtz instabilities and, when accretion is mediated through an accretion disk, via magnetic field reconnection with small-scale fields in the disk and turbulent diffusion (\citealt{Arons76, GLa,Kulkarni08}, and references therein). In a disk, the magnetic threading produces a broad transition zone composed of two regions: a broad outer zone, where the disk angular velocity is nearly Keplerian, and a narrow inner zone or boundary layer, where the disk angular velocity significantly departs from the Keplerian value.
The outer radius of the boundary layer is identified as the magnetospheric radius $r_{\rm m}$: \begin{equation} r_m = k\,r_A \end{equation} where the dimensionless parameter $k$, also called the coupling factor, is ${\sim}0.5$ as given by \citet{GLa}, but it ranges from $0.3$ to $1$ in later models (\citealt{Wang96, Li97, Li+Wang99, Long+05, Bessolaz08, Zanni13, Dallosso16}, and references therein). It can be more generally considered as a function of the accretion rate, $k(\dot{M})$, and of the inclination angle between the neutron star rotation and magnetic field axes, and it can be significantly smaller than that obtained in the model by \citet[see, e.g., \citealt{Bozzo09}]{GLa}. Disk-driven accretion can be inhibited by a centrifugal barrier if the pulsar magnetosphere rotates faster than the Keplerian velocity of the matter in the disk. This condition is realized when the inner disk radius, coincident with the magnetospheric radius $r_{m}$, is greater than the co-rotation radius, $r_{\rm co}$, at which the Keplerian angular velocity of the disk, $\sqrt{GM_{NS}/r_{co}^3}$, is equal to the angular velocity of the NS, $\omega=2\pi/P_s$: \begin{equation} r_{co}=\left(\frac{GM_{NS}\,P_{s}^2}{4\pi^2}\right)^{1/3} \end{equation} where $P_{\rm s}$ is the NS spin period. For a standard NS mass of $1.4\,M_\odot$, the co-rotation radius is of the order of $r_{\rm co}=1.7\times10^8\,P_{\rm s}^{2/3}\,$cm. The relative positions of these radii determine the accretion regime at work, driven by the possible onset of a barrier (either magnetic or centrifugal) that inhibits direct wind accretion, also called the ``gating'' mechanism \citep{Illarionov75, Stella85, Bozzo08}. Different regimes of plasma cooling also play a role in quasi-spherical wind-accretion onto slowly rotating NSs \citep{Shakura12, Shakura13,Shakura17}.
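The interplay of the three characteristic radii can be sketched numerically. The following Python fragment is our own illustration (function names and the $k=0.5$ default are assumptions); it evaluates the scaled expressions for $r_{\rm A}$ and $r_{\rm co}$ given above and applies the centrifugal condition $r_{\rm m}\gtrless r_{\rm co}$:

```python
def alfven_radius(mdot10=1.0, mu30=1.0, m14=1.0):
    """Alfven radius r_A [cm] in the paper's scaled units
    (mdot in 1e-10 Msun/yr, mu in 1e30 G cm^3, M in 1.4 Msun)."""
    return 6.8e8 * mdot10**(-2.0 / 7.0) * mu30**(4.0 / 7.0) * m14**(-1.0 / 7.0)

def corotation_radius(p_spin):
    """Co-rotation radius r_co [cm] for a 1.4 Msun NS; p_spin in seconds."""
    return 1.7e8 * p_spin**(2.0 / 3.0)

def accretion_state(p_spin, mdot10=1.0, mu30=1.0, k=0.5):
    """Crude regime discriminator: propeller if r_m > r_co, else accretion."""
    r_m = k * alfven_radius(mdot10, mu30)
    return "propeller" if r_m > corotation_radius(p_spin) else "accretion"

# At this accretion rate a fast rotator (1 s) is centrifugally inhibited,
# while a slow rotator (100 s) accretes freely.
```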
In these systems, a hot shell forms above the NS magnetosphere and, depending on the mass accretion rate, can enter the magnetosphere either through inefficient radiative plasma cooling or through efficient Compton cooling. At the same time, the plasma mediates the removal of angular momentum from the rotating magnetosphere by large-scale convective motions. When $r_{\rm m}>r_{\rm co}$, the centrifugal barrier sets in and mass is propelled away or halted at the boundary, rather than being accreted. This carries angular momentum away from the NS, which consequently begins to spin down, as the star enters the so-called propeller regime~\citep{Illarionov75}. However, for $r_{\rm m}<r_{\rm co}$, matter, and thus angular momentum, is transferred to the spinning NS. Accordingly, in the case of disk accretion, the total torque $N$ that the disk exerts on the NS is composed of two terms: \begin{equation}\label{eq:torque1} N = N_0 + N_{mag}, \end{equation} where $N_0=\dot{M}\,\sqrt{G\,M\,r_{\rm m}}$ is the torque produced by the matter leaving the disk at $r_{\rm m}$ to accrete onto the NS, while $N_{mag}=-\int_{r_{\rm m}}^{\infty}B_\phi\,B_z\,r^2\,dr$ is the torque generated by the twisted magnetic field lines threading the disk outside $r_{\rm m}$. Following \citet{GLa,GLb}, the total torque in Eq.~\ref{eq:torque1} can also be expressed as \begin{equation} N = n(\omega_s)\,\dot{M}\,\sqrt{G\,M\,r_{co}}, \end{equation} where $n$ is a dimensionless torque that is a function of the fastness parameter $\omega_s$: \begin{equation} \omega_s=\frac{\nu_s}{\nu_k}=\left(\frac{r_{in}}{r_{co}}\right)^{3/2}, \end{equation} where $\nu_{s}$ and $\nu_k$ are the spin frequency and the Keplerian frequency at the inner disk radius $r_{\rm in}\simeq r_{\rm m}$, respectively.
Accretion from a disk leads to a spin period derivative (\citealt{GL77,GLa,GLb}) equal to \begin{equation} -\dot{P}=5.0\times10^{-5}\,\mu^{2/7}_{30}\,n(\omega_s)\,R_{NS_6}^{6/7}\,M_{1.4}^{-3/7}\,I_{45}^{-1}\,P_s^2\,L^{6/7}_{37}\,{\rm s\,s^{-1}}, \end{equation} where $R_{\rm NS_6}$ is the NS radius in units of $10^6\,$cm, $I_{45}$ is the NS moment of inertia in units of $10^{45}\,$g\,cm$^{2}$, and $L_{37}$ is the bolometric luminosity in the X-ray band (1-200 keV) in units of $10^{37}\,$erg\,s$^{-1}$. For $0<\omega_s<0.9$, a good approximation of the dimensionless torque is \citep{Klus13b}: \begin{equation} n(\omega_s) = 1.4\,(1-2.86\,\omega_s)\,(1-\omega_s)^{-1}, \end{equation} which, for $M_{\rm{NS}}=1.4\,M_\odot$ and $R_{\rm{NS}}=10\,$km, results in a torque \citep{Ho14} of \begin{equation}\label{eq:torque2} -\dot{P}=7.1\times10^{-5}\,\textrm{s\,yr}^{-1}\,k^{1/2}\,(1-\omega_s)\,\mu^{2/7}_{30}\,(P\,L^{3/7}_{37})^2. \end{equation} Eq.~\ref{eq:torque2} is widely used in the literature to interpret the spin period derivatives observed in accreting XRPs in terms of disk accretion. On the other hand, the quasi-spherical accretion model has been introduced to explain the behavior of wind-accreting systems that show long-term spin period evolution \citep{Gonzalez+12,Gonzalez-Galan18, Shakura12, Postnov2015GX304}. This model describes two different accretion regimes, separated by a critical mass accretion rate $\dot{M}_{cr}$ (parametrized through the ratio $y=\dot{M}/\dot{M}_{cr}$), corresponding to a luminosity of $4\times10^{36}\,$erg\,s$^{-1}$. At lower luminosities, an extended quasi-static shell is formed by the matter that is gravitationally captured by the NS and settles subsonically onto the magnetosphere. The quasi-static shell mediates the exchange of angular momentum between the captured matter and the NS magnetosphere via turbulent viscosity and convective motions. Both spin-up and spin-down are possible in the subsonic regime, even if the specific angular momentum of the accreted matter is prograde.
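As a numerical sanity check of the disk-torque expression of Eq.~\ref{eq:torque2}, the following Python sketch (ours; parameter names and defaults are illustrative) evaluates the expected spin-up rate for canonical NS parameters:

```python
def neg_pdot(p_spin, l37, mu30=1.0, k=0.5, omega_s=0.0):
    """-Pdot [s/yr] from the Ho et al. (2014) form of the disk torque,
    assuming a canonical 1.4 Msun, 10 km NS.
    p_spin: spin period [s]; l37: luminosity in 1e37 erg/s."""
    return 7.1e-5 * k**0.5 * (1.0 - omega_s) * mu30**(2.0 / 7.0) \
        * (p_spin * l37**(3.0 / 7.0))**2

# A 10 s pulsar accreting at 1e37 erg/s, far from spin equilibrium
# (omega_s ~ 0), spins up by a few milliseconds per year.
pdot_example = neg_pdot(10.0, 1.0)
```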
As the accretion rate increases above the critical value, the flow near the Alfv\'{e}n surface becomes supersonic and a free-fall gap appears above the magnetosphere due to the strong Compton cooling, causing the accretion to become highly unstable. In this regime, depending on the sign of the specific angular momentum, either spin-up or spin-down is possible. The quasi-spherical accretion model also takes into account the coupling of the rotating matter with the magnetosphere in different regimes. A strong coupling regime is realized for rapidly rotating magnetospheres, in which the exchange of angular momentum between the accreted matter and the NS can be described as \begin{equation} I\dot{\omega} = K_{mag} + K_{surf} \end{equation} where $I$ is the NS's moment of inertia, $K_{\rm mag}$ is the contribution to the spin frequency evolution from the plasma-magnetosphere interactions at the Alfv\'{e}n radius, which can be either positive or negative, and $K_{\rm surf}$ is the spin-up contribution due to the angular momentum returned by the matter accreted onto the NS (see Eqs.~17--19 in \citealt{Shakura12}). In the moderate coupling regime, a similar relation holds with different coupling coefficients (see Eqs.~27--29 in \citealt{Shakura12}). To determine the main dimensionless parameters, the model was used to fit observations of a few long-period pulsars.
Accordingly, the spin-down rate $\dot{\omega}_{\rm sd}$ (where $\omega = 2\pi\nu$ is the angular frequency) observed in those systems is \citep{Postnov2015GX304} \begin{equation}\label{eq:QSAMdown} \begin{split} \dot{\omega}_{sd} \approx 10^{-8} [{\rm Hz\, d^{-1}}]\, \Pi_{sd}\,\mu_{30}^{13/11}\,\left(\frac{\dot{M}}{10^{16}\,{\rm g\,s^{-1}}}\right)^{3/11}\,\left(\frac{P_s}{100\,{\rm s}}\right)^{-1}\, \end{split} \end{equation} where $\Pi_{\rm sd}$ is a parameter of the model, usually in the range $5$--$10$, $\dot{M}$ is the mass accretion rate normalized to a typical luminosity of $10^{37}\,$erg\,s$^{-1}$ (assuming $L=0.1\dot{M}c^2$), $P_{\rm s}$ is the pulsar spin period, and $\mu_{30}$ is the magnetic moment $BR^3$ in units of $10^{30}$ G\,cm$^3$. The spin-up rate $\dot{\omega}_{\rm su}$ is \begin{equation}\label{eq:QSAMup} \begin{split} \dot{\omega}_{su} \approx 10^{-9} [{\rm Hz\, d^{-1}}]\, \Pi_{su}\,\mu_{30}^{1/11}\,\left(\frac{P_{orb}}{10\,{\rm d}}\right)^{-1}\, \left(\frac{\dot{M}}{10^{16}\,{\rm g\,s^{-1}}}\right)^{7/11} \end{split} \end{equation} where $P_{\rm orb}$ is the binary orbital period, and $\Pi_{\rm su}\approx\Pi_{\rm sd}$ \citep{Shakura14a}. \section{The Fermi GBM X-Ray Monitor}\label{sec:gbm} GBM is an unfocused, background-dominated, all-sky instrument aboard the Fermi Gamma-ray Space Telescope \citep{Meegan2009}. It consists of $14$ uncollimated, inorganic scintillator detectors: $12$ thallium-doped sodium iodide (NaI) detectors and two bismuth germanate (BGO) detectors. The NaI detectors have an effective energy range of $\approx8\,$keV$-1\,$MeV, while the BGOs cover an energy range of $\approx200\,$keV$-40\,$MeV. As the emission of accreting pulsars is dominant only below $\sim100\,$keV, data from the BGO detectors are not used in this work. The NaI detectors are arranged into four clusters of three detectors, placed around each corner of the spacecraft in such a fashion that any source un-occulted by the Earth will illuminate at least one cluster.
GBM has three continuous (C) data types: CTIME data, with a nominal time resolution of $0.256\,$s and eight energy channels, used for event detection and localization; CSPEC data, with a nominal time resolution of $4.096\,$s and $128$ energy channels, used for spectral modeling; and continuous time tagged event (CTTE) data, with timestamps for individual photon events ($2\,\mu$s precision) over $128$ energy channels. The latter has been available since 2012 November. Even though GBM is devoted to hunting gamma-ray bursts, it has proven to be an excellent tool for monitoring other transient X-ray sources as well. Consequently, the GBM Accreting Pulsars Program\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars.html\#}} (GAPP) was developed, with the aim of analyzing pulsars detected by GBM. In the context of the GAPP, two different pulse search strategies have been implemented: the daily blind search and the targeted (i.e., source-specific) search. The blind search consists of computing the daily fluxes for $26$ directions ($24$ equally spaced on the Galactic plane, plus the Magellanic Clouds), using the CTIME data type. For each direction, a blind Fast-Fourier Transform (FFT) search is performed between $1\,$mHz and $2\,$Hz (and up to 40 Hz with CTTE data). This ensures sensitivity to new sources, new outbursts from known sources, and pulsars whose pulse period is poorly constrained. Typically, only the first three GBM CTIME channels are used for this search: channels 0 ($8-12\,$keV), 1 ($12-25\,$keV), and 2 ($25-50\,$keV). When a new source is detected through the blind search, its Galactic longitude is interpolated from the several directions showing the strongest signals in the FFT power spectra. The targeted search consists of an epoch-folding-based search over much smaller frequency ranges than the blind search method, which sometimes includes a search over the frequency derivative (see also Sect.~\ref{sec:timing}).
This is applied to known sources, which provides a higher sensitivity due to source-specific information such as the location, orbital parameters, and flux spectrum. For each source, GAPP extracts the pulsed portion of the pulsar's signal (see Sect.~\ref{sec:timing}). However, the un-pulsed flux of a discrete source can be obtained by fitting the steps in count rates that occur when the source rises or sets over the Earth's horizon (see the GBM Earth Occultation Method --GEOM-- web page\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/earth_occ.html}.} and \citealt{Wilson-Hodge12}). The GAPP also inherited data from previous missions like the Burst and Transient Source Experiment (BATSE; \citealt{Fishman92,Bildsten1997}) on board the Compton Gamma Ray Observatory (CGRO; \citealt{Gehrels94}). One of the larger transient monitors in recent history, BATSE comprised eight NaI(Tl) large area detectors (LAD) each with 2025 cm$^2$ of geometric area \citep{Fishman92}. A plastic charged particle anticoincidence detector was in front of each LAD, resulting in a lower energy threshold of $\sim20$ keV. BATSE also included eight spectroscopy detectors that were not used for pulsar monitoring. The BATSE data consisted of nearly continuous time-binned data DISCLA (four channels, 1.024 s resolution) and CONT (16 channels, 2.048 s resolution). Generally, the first BATSE DISCLA channel, 20-50 keV, was used for pulsar monitoring. Comparatively, GBM detectors only have a Beryllium window in front and so they can reach $\sim 8$ keV, much lower than the BATSE LADs could. Despite the larger area of the BATSE detectors, the sensitivity is similar for GBM and BATSE for detecting outbursts of XRPs, due to the added low-energy response of the GBM detectors along with the abundance of photons from the sources at those energies. The pointing strategies for GBM and CGRO differ, with Fermi operating in a sky-scanning mode and CGRO operating using inertial pointing. 
This resulted in the need to incorporate the detector response at an earlier step in the data analysis process for GBM, to account for the changing angular response as the spacecraft scanned the sky. Both missions were in low-Earth orbit, at a similar inclination and altitude, resulting in similar energy-dependent background rates. Both missions used the data when a source was visible above the Earth's horizon for XRP analysis, resulting in similar exposure times of $\sim 40$ ks per day per source (depending on the source declination). The GAPP (see Sect.~\ref{sec:timing}) is based on the technique developed for BATSE \citep{Finger+99,Wilson-Hodge99,Wilson+02,Wilson03}, which measured pulsed frequencies and pulsed fluxes for a number of XRPs. Similarly to GBM, BATSE measured the un-pulsed flux for these sources using Earth occultation \citep{Harmon04}. We have now consolidated the available BATSE data within the GAPP web pages\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars.html}} so as to provide the community with 20 yr of pulsar monitoring spanning the last 30 yr. We show that the combination of BATSE and GBM data, spanning almost three decades, allows for the long-term study of XRPs, unveiling otherwise unobservable phenomena. \section{Timing}\label{sec:timing} \subsection{Time corrections}\label{subsec:time_corr} Before delving into the timing analysis of each detected pulsar, the epochs of the observed events need to be corrected. All recorded epoch times, $t$, are barycentered to remove the effects of the satellite's and the Earth's motions, thus correcting the times as if the reference system were located at the center of mass of the solar system.
This correction process returns a final epoch time, $t'$, that takes into account the following contributions: the reference time $t_0$, clock corrections $\Delta_{\rm clock}$ (which account for differences between the observatory clocks and terrestrial time standards), the Roemer delay $\Delta_{\rm R}$ (which accounts for the classical light travel time across the Earth's orbit), the Einstein delay $\Delta_{\rm E}$ (which accounts for the time dilation from the moving pulsar and observatory, and for the gravitational redshift caused by the Sun and planets or the binary companion), and the Shapiro delay $\Delta_{\rm S}$ (which represents the extra time required by the pulses to travel through the curved space-time containing the solar system masses). Combining these terms in the equation for the final epoch, we have \begin{equation*} t' = t - t_0 + \Delta_{clock} + \Delta_{R}+ \Delta_{E}+ \Delta_{S}. \end{equation*} If the orbit of the pulsar is known, a further correction is applied to the pulse arrival times, a correction known as orbital demodulation. The pulsar emission time, $t^{\rm em}$, is computed from the Barycentric Dynamical Time (TDB) $t'$, as $t^{\rm em} = TDB - z$, where $z$ is the line-of-sight delay associated with the binary orbit of the pulsar \citep{Deeter+81,Hilditch01}: \begin{equation}\label{eq:z_def} z = a_x\,\sin i\,\left[\sin\omega\,(\cos E-e)+\sqrt{1-e^2}\,\cos\omega\,\sin E\right]. \end{equation} Here, $a_{\rm x}$ is the projected semi-major axis of the binary orbit, $i$ is the orbit's inclination relative to the plane of the sky, $\omega$ is the periastron angle, and $e$ is the binary orbit eccentricity, while $E$ is the eccentric anomaly as expressed in Kepler's equation \begin{equation} E - e\,\sin E = \frac{2\pi}{P_{orb}}(t^{em} - \tau_p), \end{equation} where $P_{\rm orb}$ is the orbital period and $\tau_p$ is the periastron passage epoch.
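The orbital demodulation above requires inverting Kepler's equation for the eccentric anomaly at each emission time. A minimal Python sketch (ours, not the GAPP code; Newton iteration is one standard choice, and all names are illustrative) computes $E$ and the line-of-sight delay $z$:

```python
import math

def eccentric_anomaly(t_em, p_orb, tau_p, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration,
    with mean anomaly M = 2*pi*(t_em - tau_p)/p_orb."""
    mean_anom = 2.0 * math.pi * (t_em - tau_p) / p_orb
    E = mean_anom  # good starting guess for moderate eccentricity
    for _ in range(100):
        dE = (E - e * math.sin(E) - mean_anom) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def los_delay(t_em, axsini, p_orb, tau_p, e, w):
    """Line-of-sight delay z (same units as axsini);
    w is the periastron angle in radians."""
    E = eccentric_anomaly(t_em, p_orb, tau_p, e)
    return axsini * (math.sin(w) * (math.cos(E) - e)
                     + math.sqrt(1.0 - e * e) * math.cos(w) * math.sin(E))
```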
\subsection{The phase model} Pulsars represent excellent timing tools thanks to their very small moment of inertia, which allows precise measurements of the pulsar spin and spin derivative via pulsar timing techniques. This involves the regular monitoring of the rotation of the NS by tracking the arrival times of the observed pulses. For this, an average pulse profile is produced at any time to be used as a template, under the assumption that any given observed profile is a phase-shifted and scaled version of the template. This is encoded in the evolution of the pulse phase as a function of time, $\phi(t)$. This \textit{pulse phase model} can be represented as a Taylor expansion around the reference time $t_{\rm 0}$ as \begin{equation}\label{eq:phasemodel} \phi(t) = \phi_0 + \nu_0(t - t_0) + \frac{1}{2}\dot{\nu}(t-t_0)^2 + ... \end{equation} where $\nu_{\rm 0} = \nu(t=t_{\rm 0})$ (and $\phi_{\rm 0}= \phi(t=t_{\rm 0})$), while $\dot\nu$ is the pulse frequency derivative. Pulsar timing deals with the determination of the pulse phase as accurately as possible in order to unambiguously establish the exact number of pulsar rotations between observations. By fitting Eq.~\ref{eq:phasemodel} to the frequencies determined by means of power spectra or the epoch-folding method, a preliminary phase model is estimated. This allows us to produce pulse profiles and, at the same time, to reduce the amount of data and computing time. In turn, the pulse profiles are used to refine the phase model to a higher precision (i.e., by phase connection; see Sect.~\ref{subsec:Fourier} and Sect.~\ref{subsec:f_fdot}). To extract the periodic signal, two main methods are considered in this work: 1) the harmonic expansion, and 2) the search in frequency and frequency derivative.
\subsection{Pulse profiles using a harmonic expansion}\label{subsec:Fourier} The pulsar periodic signal can be represented by a Fourier harmonic series: \begin{equation}\label{eq:countrate} m_k = \sum^{N}_{h=1}a_h \cos\left\{2\pi\,h\,\phi(t_k)\right\} + b_h \sin\left\{2\pi\,h\,\phi(t_k)\right\}, \end{equation} where $m_{\rm k}$ is the model count rate at time $t_{\rm k}$, $a_{\rm h}$ and $b_{\rm h}$ are the Fourier coefficients, $h$ is the harmonic number, $N$ is the number of harmonics, and $\phi(t_{\rm k})$ is the phase model. Similarly to BATSE \citep{Bildsten1997,Finger+99}, six harmonics are typically used to represent the pulse profile. This results in a reasonable representation of the pulse profiles of all observed sources, while the employment of additional harmonics does not improve the pulse structure significantly. To obtain the harmonic coefficients $a_h$ and $b_h$, a fit is performed to minimize the $\chi^2$ function: \begin{equation} \chi^2 = \sum^{M}_{k=1}\frac{\left\{x_k-(m_k+B_k)\right\}^2}{\sigma^2_{x_k}}, \end{equation} where $x_{\rm k}$ and $\sigma_{x_k}$ are the measured count rates and their errors, respectively, $M=2N$ is the number of statistically independent points, and $B_{\rm k}$ is the background (the un-pulsed count rate level). For this technique to work, a careful choice of the data length has to be made. The interval needs to be short enough to guarantee that the phase model does not change significantly between the beginning and the end of the observation, while at the same time, the interval needs to be long enough to include sufficient data, typically between 5 and 10 times the spin period of the measured source.
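Because the harmonic model of Eq.~\ref{eq:countrate} is linear in $a_h$ and $b_h$, the $\chi^2$ minimization reduces to (weighted) linear least squares. The following Python sketch is our own illustration with hypothetical coefficient values, recovering two harmonics from noiseless simulated count rates:

```python
import numpy as np

# Illustration (ours, not the GAPP pipeline): recover Fourier coefficients
# a_h, b_h by linear least squares from simulated, noiseless count rates.
N_HARM = 2
true_a = np.array([3.0, 1.0])    # hypothetical cosine coefficients
true_b = np.array([0.5, -0.2])   # hypothetical sine coefficients
phases = np.linspace(0.0, 1.0, 50, endpoint=False)  # phi(t_k) over one cycle

rates = sum(true_a[h - 1] * np.cos(2 * np.pi * h * phases)
            + true_b[h - 1] * np.sin(2 * np.pi * h * phases)
            for h in range(1, N_HARM + 1))

# Design matrix: columns cos(2*pi*h*phi), sin(2*pi*h*phi) for h = 1..N
cols = []
for h in range(1, N_HARM + 1):
    cols.append(np.cos(2 * np.pi * h * phases))
    cols.append(np.sin(2 * np.pi * h * phases))
design = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(design, rates, rcond=None)
a_fit, b_fit = coef[0::2], coef[1::2]
```

In a real fit the rows would be weighted by $1/\sigma_{x_k}$ and a background term included; the structure of the solution is unchanged.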
Following \citet{Bildsten1997} and \citet{Woods07}, the GBM pulsed flux ($F_{\rm pulsed}$) is obtained as the root mean squared (RMS) pulsed flux: \begin{equation} F_{pulsed} = \sqrt{\sum^{N}_{h=1} \frac{ a_h^2 + b_h^2 - (\sigma_{a_h} ^2 + \sigma_{b_h} ^2)}{2} } \end{equation} where \begin{equation} \sigma_{a_h}^2 = \frac{4}{P^2}\sum^{P}_{i=1} \sigma_{r_i}^2 \cos^2{(2\pi\phi_i h)} \quad, \quad \sigma_{b_h}^2 = \frac{4}{P^2}\sum^{P}_{i=1} \sigma_{r_i}^2 \sin^2{(2\pi\phi_i h)} \end{equation} and $P$ is the total number of phase bins, and $\sigma_{\rm r_{\rm i}}$ is the uncertainty in the count rate in the $i$th phase bin. The spectral model used to combine the count rate of each source with the spectral response is an empirical model based on observations published in the literature. This procedure ensures that the pulsed flux is unbiased against the energy dependence of the pulsed flux commonly observed in accreting XRPs, because $F_{\rm pulsed}$ is calculated in relatively narrow energy bins, and the effect of an incorrectly assumed spectral model has been calculated to affect the derived flux only marginally, i.e., at a $\sim5\%$ level \citep{Wilson-Hodge12}. \subsection{Pulse profiles using a search in frequency and frequency derivative}\label{subsec:f_fdot} Due to the spin evolution shown by accreting pulsars, it is useful to apply a technique that estimates not only the spin frequency but also its derivative. Consequently, a search over a grid of pulse frequencies and frequency derivatives is performed to find the best-fitting values. The search range is often estimated using past measurements (depending on availability); otherwise, a safe interval of $\pm0.01\nu_{\rm 0}$ is used, where $\nu_{\rm 0}$ is the pulsar frequency estimated in Sect.~\ref{subsec:Fourier}.
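The noise-debiased RMS sum above is easy to state in code. A minimal Python sketch (ours; the clipping of a negative sum to zero is our assumption for a noise-dominated measurement):

```python
import math

def rms_pulsed_flux(a, b, var_a, var_b):
    """Noise-debiased RMS pulsed flux from harmonic coefficients.
    a, b: Fourier coefficients per harmonic; var_a, var_b: their
    variances (sigma_a^2, sigma_b^2)."""
    s = sum((ah * ah + bh * bh - (va + vb)) / 2.0
            for ah, bh, va, vb in zip(a, b, var_a, var_b))
    # Noise can push the debiased sum negative; clip before the sqrt.
    return math.sqrt(max(s, 0.0))

# A single noiseless harmonic of amplitude 1 has RMS flux 1/sqrt(2).
f_one_harmonic = rms_pulsed_flux([1.0], [0.0], [0.0], [0.0])
```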
For an estimation of the pulsar frequency derivative range, a maximum spin-up rate is obtained from accretion theory (e.g., \citealt{Parmar+89}) assuming canonical NS parameters, \begin{equation}\label{eq:parmar} \dot\nu = 1.9\times10^{-12}\mu^{2/7}_{30}L^{6/7}_{37}\, \rm Hz\,s^{-1}, \end{equation} where $\mu_{\rm 30}$ is the magnetic moment of the NS in units of $10^{30}\,$G\,cm$^3$ and $L_{37}$ is the luminosity in units of $10^{37}\,$erg\,s$^{-1}$. Typical spin-down rates are of the order of a few times $10^{-13}\,$Hz\,s$^{-1}$. Eq.~\ref{eq:parmar} is considered applicable only over a limited range of relatively high luminosities \citep{Parmar+89}, a condition that is met for all analyzed sources when detected by GBM. Once the frequency and frequency derivative search ranges are established, a grid of phase offsets from the phase model in Eq.~\ref{eq:phasemodel} is created: \begin{equation} \delta\phi_k(\delta\nu_p,\dot\nu_q) = \delta\nu_p(\bar{t}_k-\tau)+\frac{1}{2}\dot\nu_q(\bar{t}_k-\tau)^2, \end{equation} where $\bar{t}_{\rm k}$ is the time at the midpoint of segment $k$, $\tau$ is a reference epoch chosen near the center of the considered time interval (that is, at the epoch $\tau$, $\delta\phi = 0$ by definition), $\delta\nu_p$ is an offset in pulse frequency from $\nu_{\rm 0}$ in Eq.~\ref{eq:phasemodel}, and $\dot\nu_{\rm q}$ is an offset in its derivative. Each offset in pulse phase leads to a shift in the individual pulse profiles, applied as a modification of the estimated complex Fourier coefficient: \begin{equation} \beta_{kh}(\delta\nu_p,\dot\nu_q) = (a_{kh}-i\,b_{kh})\,{\rm exp}\left\{-i\,2\pi\,h\,\delta\phi_k(\delta\nu_p,\dot\nu_q)\right\}, \end{equation} where $a_{\rm kh}$ and $b_{\rm kh}$ are the harmonic coefficients for harmonic $h$ and profile $k$ from Eq.~\ref{eq:countrate}. The best frequency and frequency derivative within the search grid are determined using the $Y_n$ statistic, following \citet{Finger+99}.
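One grid point of this search amounts to evaluating the trial phase offset and rotating the complex Fourier coefficients accordingly. A Python sketch of that single step (ours; function and argument names are illustrative):

```python
import cmath
import math

def phase_offset(dnu, nudot, t_mid, tau):
    """Trial phase offset delta_phi_k for offsets (dnu, nudot)
    about the reference epoch tau; t_mid is the segment midpoint."""
    dt = t_mid - tau
    return dnu * dt + 0.5 * nudot * dt * dt

def shifted_coeff(a_kh, b_kh, h, dnu, nudot, t_mid, tau):
    """Phase-shifted complex Fourier coefficient beta_kh for harmonic h."""
    dphi = phase_offset(dnu, nudot, t_mid, tau)
    return (a_kh - 1j * b_kh) * cmath.exp(-1j * 2.0 * math.pi * h * dphi)

# At the reference epoch (t_mid == tau) the trial offset vanishes and the
# coefficient is left unchanged, as expected from the definition of tau.
```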
\subsection{Phase offset estimation and model fitting} Once pulse profile templates are obtained following the methods outlined in Sect.~\ref{subsec:Fourier} and Sect.~\ref{subsec:f_fdot}, a phase offset $\Delta\phi$ can be estimated by comparing the fitted pulse profiles with the obtained template. The $\Delta\phi$ and pulse amplitude $A$ of each pulse are then determined by fitting each pulse profile to the template ($T_h$) by minimization of \begin{equation} \chi^2 = \sum^{M}_{k=1}\frac{|\alpha_{kh} - A\,T_h\,\exp(-i\,2\pi\,h\,\Delta\phi_k)|^2}{\sigma^2_{kh}}. \end{equation} Here, $\alpha_{kh} = a_{kh} - i\,b_{kh}$ is the complex Fourier coefficient for harmonic $h$ and profile $k$, and $\sigma^2_{\rm kh}$ is the error on the real or imaginary component of $\alpha_{\rm kh}$. Phase offsets are the signature that the observed spin frequency is modulated by some effect. If the offsets show a random, erratic scatter consistent with a constant value, then the phase model cannot be improved, and the offsets are considered noise. However, if the offsets show a (possibly periodic) pattern, the phase model can be improved by minimization of \begin{equation}\label{eq:phase-mod} \chi^2 = \sum^{M}_{k=1}\frac{(\phi(t^{em}_k) - \phi^{model}(t^{em}_k))^2}{\sigma^2_{\phi(t^{em}_k)}}, \end{equation} where $\phi(t^{\rm em}_{\rm k})$ is the total measured pulse phase (the phase model used to fold the pulse profiles plus the measured offset), $\sigma^2_{\phi(t^{\rm em}_{\rm k})}$ is the error on $\phi(t^{\rm em}_{\rm k})$, and $\phi^{model}(t^{\rm em}_{\rm k})$ is the new phase model used in the fit. Typically, the Levenberg-Marquardt method \citep{Press92} is used for the minimization of Eq.~\ref{eq:phase-mod}.
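For intuition, in the noiseless, single-harmonic case the fit has a closed form: the ratio of the measured coefficient to the template directly encodes the amplitude and the phase shift. A Python sketch of this reduced case (ours; the full multi-harmonic, weighted fit requires the numerical minimization described above):

```python
import cmath
import math

def offset_and_amplitude(alpha, template, h=1):
    """Closed-form recovery of (delta_phi, A) for one harmonic, assuming
    alpha = A * template * exp(-i*2*pi*h*delta_phi) with no noise."""
    ratio = alpha / template
    amplitude = abs(ratio)
    delta_phi = -cmath.phase(ratio) / (2.0 * math.pi * h)
    return delta_phi, amplitude
```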
Such a process constrains the pulsar binary orbit by considering $t^{\rm em}=TDB - z$, where TDB is the Barycentric Dynamical Time and $z$ is the line-of-sight delay associated with the binary orbit (see Sect.~\ref{subsec:time_corr} and Eq.~\ref{eq:z_def}). \section{Overview of Accreting X-Ray Pulsars}\label{sec:overview} XRPs that are part of binary systems can be classified into two groups, according to the mass of the donor star \citep{Lewin97}: \begin{itemize} \item High Mass X-ray Binaries (HMXBs) are systems where the donor star is a massive O- or B-type star, typically with $M\geq5\,M_\odot$. These systems are generally young, and the stellar wind is strong. When the compact object is a NS, its magnetic field is of the order of $10^{12}\,$G. In our Galaxy, these objects are mostly found on the Galactic plane, especially along the spiral arms. \item Low Mass X-ray Binaries (LMXBs) are systems where the donor star is of spectral type A or later, or a white dwarf, with a mass of $M\leq1.2\,M_\odot$. These systems are generally older than HMXBs, with weaker stellar winds from the donor. The NS magnetic field observed in these systems has decayed to about $10^{7-8}\,$G. Moreover, LMXBs are typically found toward the Galactic center, although some of them have been observed in globular clusters. \end{itemize} XRPs are largely found in HMXBs. In fact, GBM-detected XRPs are almost exclusively HMXBs. Depending on the binary system properties, three different methods of mass transfer can take place in X-ray binaries: \begin{itemize} \item[1.] \textit{Wind-fed systems}: Accretion from stellar winds is particularly relevant when the donor star is a massive main-sequence or supergiant O/B star, because those winds are dense, with mass-loss rates of $\dot{M}_{\rm w}\approx 10^{-6}-10^{-7}\,M_\odot\,$yr$^{-1}$.
Typically, in wind-fed systems, the compact object orbits the donor star at a close distance, thus being deeply embedded in the stellar wind and accreting at all orbital phases. These systems are therefore \textit{persistent} sources, showing variability on timescales much shorter than the orbital period (i.e., $10^2-10^4\,$s). \item[2.] \textit{Roche lobe-overflow (RLO) systems}: When the binary system is such that the donor star radius is larger than its Roche lobe, the star loses part of its material through the first Lagrangian point $L_1$. When RLO takes place, the mass flow does not directly impact the compact object, due to the intrinsic orbital angular momentum of the transferred material. Instead, it forms an accretion disk around the compact object. Since the transfer of matter is generally steady, RLO systems are also persistent sources. \begin{figure*}[!t] \includegraphics[width=1.\textwidth]{corbet_plot.pdf} \caption{The Corbet diagram showing spin period (y-axis) versus orbital period (x-axis) for all GBM-detected accreting XRPs with known orbital periods. A few representative sources have been labelled. The region populated by accreting millisecond pulsars (grey oval) has also been labelled for comparison.} \label{fig:corbet} \end{figure*} \item[3.] \textit{Be/X-Ray Binary systems (BeXRBs)}: In these systems, the donor star is an O- or B-type star that expels its wind in the equatorial plane in the form of a circumstellar disk (also called the Be disk). The disk is composed of ionized gas that produces emission lines (especially H$\alpha$). When the orbiting compact object (CO) passes close to or through the Be disk, a large flow of matter is pulled from the disk, forming an accretion disk around the CO through its gravitational potential. Matter is then accreted onto the CO, giving rise to an X-ray outburst. Due to the orbital modulation of the accreted matter, these systems show only \textit{transient} activity.
X-ray outbursts in BeXRBs are classified into two types: \begin{itemize} \item{Type I}: also called \textit{normal} outbursts. These are less luminous outbursts, with a peak luminosity of $\sim10^{36-37}\,$erg\,s$^{-1}$, occurring typically at periastron passages and lasting for a fraction of the orbital period. \item{Type II}: also called \textit{giant} outbursts. These episodes are rarer and more luminous (peak luminosity of $\sim10^{37-38}\,$erg\,s$^{-1}$), do not show any preferred orbital phase, and last for a large fraction of the orbital period or even for several orbits. \end{itemize} \end{itemize} Despite the aforementioned classifications, the zoo of XRPs often shows systems that have properties belonging to different classes and are characterized by mixed modes of mass transfer. For example, theoretical and observational works show that wind-captured disks can form around the CO of certain HMXBs (see, e.g., \citealt{Jenke2012,Blondin13,ElMellah19}). Moreover, the recent discovery of new systems has led to the classification of additional subclasses, e.g., the HMXBs with supergiant companions (SgXBs) and the Supergiant Fast X-ray Transients (SFXTs; \citealt[and references therein]{Sidoli18}). However, different subclasses may also represent similar systems observed in different accretion regimes or at different evolutionary stages. For example, gated accretion models are invoked to explain the variable activity of SFXTs, where the transitions between possible regimes are triggered by the inhomogeneous (i.e., clumpy) ambient wind \citep[and references therein]{Bozzo16, MartinezNunez17, Pradhan18}. All GBM-detected XRPs and their relevant properties are summarized in Table~\ref{tab:summary}. The different classes for these sources are shown in Fig.~\ref{fig:corbet}, where they are plotted in the Corbet diagram \citep{Corbet1986}.
\begin{longrotatetable} \begin{deluxetable}{lccccccccccc} \tabletypesize{\scriptsize} \tablecaption{GBM X-Ray Pulsars: Coordinates, Orbital Elements, Spin Periods and Distances.\label{tab:summary}} \tablewidth{\textwidth} \tablehead{ \colhead{Source} & \colhead{Class} & \colhead{R.A.} & \colhead{Decl.} & \colhead{$P_{\rm orb}$} & \colhead{$P_{\rm spin}$} & \colhead{$T_{\pi/2}^*$} & \colhead{$a_{\rm x}$sin\,$i$} & \colhead{$w$} & \colhead{$e$} & \colhead{$d^{\dagger}$}\\ \colhead{} & \colhead{} & \colhead{($^\circ$)} & \colhead{($^\circ$)} & \colhead{(days)} & \colhead{(s)} & \colhead{(MJD)} & \colhead{(l s)} & \colhead{($^\circ$)} & \colhead{} & \colhead{(kpc)} } {\startdata{ & & & & & $Transient$ & & & & & \\ GRO J1744--28\tablenotemark{a} & LMXB-RLO & 266.1379 & -28.7408 & 11.8358(5) & 0.46704631 & 56692.739(2) & 2.639(1) & 0.00 & $<6\times10^{-3}$ & $8.5^{+2.0}_{-4.5}$\, [\tablenotemark{b}]\\ SAX J2103.5+4545\tablenotemark{a} & BeXRB & 315.8988 & 45.7515 & $12.66528(51)$ & 358.61 & $52545.411(24)$ & $80.81(67)$ & $241.36(2.18)$ & $0.401(18)$ & $6.4^{+0.9}_{-0.7}$\\ 4U 1901+03\tablenotemark{a} & BeXRB & 285.9047 & 3.1920 & 22.5348(21) & 2.76179 & 58563.8361(8) & 106.989(15) & 268.812(3) & 0.0363(3) & $2.2^{+2.2}_{-1.3}$ \\ RX J0520.5--6932\tablenotemark{a} & BeXRB & 80.1288 & -69.5319 & 23.93(7) & 8.037 & 56666.41(3) & 107.6(8) & 233.50 & 0.0286 & LMC \\ A 1118--615\tablenotemark{a} & BeXRB & 170.2408 & -61.9161 & 24.0(4) & 407.654 & 54845.37(10) & 54.8(1.4) & 310(30) & 0.10(2) & $2.93^{+0.26}_{-0.22}$\\ 4U 0115+63\tablenotemark{a} & BeXRB & 19.6329 & 63.7400 & 24.316895 & 3.61 & 57963.237(3) & 141.769(72) & 49.51(9) & 0.3395(2) & $7.2^{+1.5}_{-1.1}$ \\ Swift J0513.4--6547\tablenotemark{a} & BeXRB & 78.3580 & -65.7940 & 27.405(8) & 27.28 & 54899.02(27) & 191(13) & ... 
& $<0.17$ & LMC\\ Swift J0243.6+6124\tablenotemark{a} & BeXRB & 40.9180 & 61.4341 & 27.587(17) & 9.86 & 58103.129(17) & 115.84(32) & -73.56(16) & 0.09848(42) & $6.9^{+1.6}_{-1.2}$\\ GRO J1750--27\tablenotemark{a} & BeXRB & 267.3046 & -26.6437 & 29.803890 & 4.45 & 49931.02(1) & 101.8(5) & 206.3(3) & 0.360(2) & $18.0^{+4.0}_{-4.0}$ [\tablenotemark{b}]\\ Swift J005139.2--721704\tablenotemark{a} & BeXRB & 12.9116 & -72.284666 & 20-40 & 4.8 & ... & ...& ... & ... & SMC \\ 2S 1553--542\tablenotemark{a} & BeXRB & 239.4542 & -54.4150 & 31.34(1) & 9.29 & 57088.927(4) & 201.48(25) & 164.8(1.2) & 0.0376(9) & $20.0^{+4.0}_{-4.0}$ [\tablenotemark{b}] \\ V 0332+53\tablenotemark{a} & BeXRB & 53.7495 & 53.1732 & 33.850(3) & 4.37 & 57157.38(5) & 77.8(2) & 277.4(1) & 0.371(5) & $5.1^{+1.1}_{-0.8}$ \\ XTE J1859+083\tablenotemark{a} & BeXRB & 284.7700 & 8.2500 & 37.97 & 10.0 & 57078.7 & 57100.5(5) & 211.4(1.8) & -117.0(0.9) & $2.7^{+2.4}_{-1.5}$ \\ KS 1947+300\tablenotemark{a} & BeXRB & 297.3979 & 30.2088 & 40.415(10) & 18.81 & 51985.31(7) & 137(3) & 33(3) & 0.033(13) & $15.2^{+3.7}_{-2.8}$ \\ 2S 1417--624\tablenotemark{a} & BeXRB & 215.3000 & -62.7000 & 42.19(1) & 17.51 & 51612.17(5) & 188(2) & 300.3(6) & 0.446(2)& $3.8^{+2.8}_{-1.8}$ \\ SMC X-3\tablenotemark{a} & BeXRB & 13.0237 & -72.4347 & 45.04(8) & 7.81 & 57676.4(3) & 190.3(1.3) & 240.3(1.1) & 0.244(5) & SMC \\ EXO 2030+375\tablenotemark{a} & BeXRB & 308.0633 & 37.6375 & 46.0213(3) & 41.33 & 52756.17(1) & 246(2) & 211.9(4) & 0.410(1) & $3.6^{+1.4}_{-0.9}$ \\ MXB 0656--072\tablenotemark{a} & BeXRB & 104.6125 & -7.2633 & 101.2 & 160.7 & ... & ... & ... & $0.4$\tablenotemark{b} & $5.1^{+1.4}_{-1.0}$ \\ GS 0834--430\tnote{a} & BeXRB & 128.9792 & -43.1850 & 105.8(4) & 12.3 & ... & ... & ... & (0.10-0.17) & $5.5^{+2.5}_{-1.7}$\\ GRO J2058+42\tablenotemark{a} & BeXRB & 314.6987 & 41.7743 & 110(3) & 193.6 & ... &... & ... 
& ...& $8.0^{+1.2}_{-1.0}$ \\ A 0535+26\tablenotemark{a} & BeXRB & 84.7274 & 26.3158 & 111.1(3) & 103.5 & 49156.7(1.0) & 267(13) & 130(5) & 0.42(2) & $2.13^{+0.26}_{-0.21}$ \\ IGR J19294+1816\tablenotemark{a} & BeXRB & 292.4829 & 18.3107 & 117.2 or 22.25\tablenotemark{b} & 12.45 & ... & ... & ... & ... & $2.9^{+2.5}_{-1.5}$ \\ GX 304--1\tablenotemark{a} & BeXRB & 195.3213 & -61.6018 & 132.18900 & 272.0 & 55425.6(5) & 601(38) & 130(4) & 0.462(19) & $2.01^{+0.15}_{-0.13}$ \\ RX J0440.9+4431\tablenotemark{a} & BeXRB & 70.2472 & 44.5304 &150.0(2) & 202. & ... & ... & ... & $>0.4$\tablenotemark{b} & $3.2^{+0.7}_{-0.5}$ \\ XTE J1946+274\tablenotemark{a} & BeXRB & 296.4140 & 27.3654 & 172.7(6) & 15.7497 & 55515.0(1.0)& 471.2(4.3) & -87.4(1.7) & 0.246(9) & $12.6^{+3.9}_{-2.9}$ \\ 2S 1845--024\tablenotemark{a} & BeXRB & 282.0738 & -2.4203 & 242.180(12) & 94.6 & 49616.48(12) & 689(38) & 252(9) & 0.8792(54) & $10.0^{+2.5}_{-2.5}$ [\tablenotemark{b}]\\ GRO J1008--57\tablenotemark{a} & BeXRB & 152.4420 & -58.2933 & 249.48(4) & 93.713 & 54424.71(20) & 530(60) & -26(8) & 0.68(2) & $5.8^{+0.5}_{-0.5}$\, [\tablenotemark{b}] \\ Cep X-4\tablenotemark{a} & BeXRB & 324.8780 & 56.9861 & (23-147) & 66.3 & ... & ... & ... & ... & $10.2^{+2.2}_{-1.6}$ \\ IGR J18179--1621\tablenotemark{a} & HMXB & 274.4675 & -16.3589 & -- & 11.8 & ... & ... & ... & ... & $8.0^{+2.0}_{-7.0}$ [\tablenotemark{b}] \\ MAXI J1409--619\tnote{a} & BeXRB? & 212.0107 & -61.9834 & -- & 50 & ... & ... & ... & ... & $14.5^{+2.0}_{-2.0}$ [\tablenotemark{b}] \\ XTE J1858+034\tablenotemark{a} & BeXRB & 284.6780 & 3.4390 & ... & 221.0 & ... & ... & ... & ... & $1.55^{+0.28}_{-0.21}$ \\ \hline & & & & & $Persistent$ & & & & & \\ 4U 1626--67\tablenotemark{a} & LMXB - RLO & 248.0700 & -67.4619 & 0.02917(3) & 7.66 & ... & ... & ... & ... 
& $3.5^{+2.3}_{-1.3}$ \\ Her X-1\tablenotemark{a} & LMXB - RLO & 254.4571 & 35.3426 & 1.700167590(2) & 1.237 & 46359.871940(6) & 13.1831(4) & 96.0(10.0) & $4.2(8)\times10^{-4}$ & $5.0^{+0.8}_{-0.6}$ \\ Cen X-3\tablenotemark{a} & sgHMXB - RLO + wind & 170.3133 & -60.6233 & 2.08704106(3) & 4.8 & 50506.788423(7) & 39.6612(9) & -- & $<0.0001$ & $6.4^{+1.4}_{-1.1}$ \\ 4U 1538--52\tablenotemark{a} & sgHMXB - wind & 235.5971 & -52.3861 & 3.7284140(76) & 526.8 & 52855.061(13) & 53.1(1.5) & 40(12) & 0.17(1) & $6.6^{+2.2}_{-1.5}$ \\ Vela X-1\tablenotemark{a} & sgHMXB - wind & 135.5286 & -40.5547 & 8.964427(12) & 83.2 & 42611.349(13) & 113.89(13) & 152.59(92) & 0.0898(12) & $2.42^{+0.19}_{-0.17}$ \\ OAO 1657--415\tablenotemark{a} & sgHMXB - wind & 255.2038 & -41.6560 & 10.447355(92) & 37.1 & 52674.1199(17) & 106.157(83) & 92.69(67) & 0.1075(12) & $7.1\pm1.3$ [\tablenotemark{b}] \\ GX 301--2\tablenotemark{a} & hgHMXB - wind & 186.6567 & -62.7703 & 41.506(3) & 684.1618 & 53532.15000 & 368.3(3.7) & 310.4(1.4) & 0.462(14) & $3.5^{+0.6}_{-0.5}$ \\ GX 1+4\tablenotemark{a} & LMXB & 263.0128 & -24.7456 & 1160.8(12.4) & 159.7 & 51942.5(53.0) & 773(20) & 168(17) & 0.101(22) & $7.6^{+4.3}_{-2.8}$ \\ }\enddata} \tablecomments{Sources are listed from top to bottom in order of increasing orbital period. $^*$Mid-eclipse time, equivalent to the time when the mean longitude $l=\pi/2$ for a circular orbit; $^{\dagger}$Distances obtained from the second Gaia Data Release (DR2) (unless specified otherwise); the Large and Small Magellanic Clouds (LMC and SMC) are considered to be at 50 and 62 kpc, respectively.
Spectral models, orbital parameters and distances for targets unavailable in the \textit{Gaia} DR2 are obtained for each source from the following works:} \begin{tablenotes}[para] \small\medskip \item[J1744a]{\citet{Sanna17};} \item[J1744b]{\citet{Nishiuchi99};} \item[J2103] {\citet{Camero07};} \item[4U1901] {\citet{Galloway+05,Jenke+Finger11}, and this work;} \item[J0520] {\citet{Kuehnel14};} \item[A1118] {\citet{Staubert11};} \item[4U0115] {This work;} \item[J0513] {\citet{Coe15};} \item[J0243] {\citet{Jenke18};} \item[J1750a] {\citet{Scott97} with period correction from GBM data;} \item[J1750b] {\citet{Lutovinov19};} \item[J005139] {\citet{Laycock2003};} \item[2S1553a] {This work;} \item[2S1553b] {\citet{Tsygankov16};} \item[V0332] {\citet{Doroshenko+16};} \item[J1859] {\citet{Kuehnel+16};} \item[KS1947] {\citet{Galloway+04};} \item[2S1417] {\citet{Finger+96,Inam+04};} \item[SMCX-3] {\citet{Townsend+17};} \item[EXO2030] {\citet{Wilson+08};} \item[MXB 0656a] {\citet{Morgan+03};} \item[MXB 0656b] {\citet{Yan12};} \item[GS0834] {\citet{Wilson97};} \item[J2058] {\citet{Wilson+98};} \item[A0535] {\citet{Finger0535};} \item[J19294a] {\citet{Corbet+Krimm09};} \item[J19294b] {\citet{Cusumano+16};} \item[GX304] {\citet{Sugizaki+15};} \item[J0440a] {\citet{Ferrigno+13};} \item[J0440b] {\citet{Yan+16};} \item[J1946a] {\citet{Marcu+15};} \item[J1946b] {\citet{Orlandini12};} \item[2S1845a] {\citet{Finger+99};} \item[2S1845b] {\citet{Koyama90};} \item[J1008a] {\citet{Coe+07,Kuehnel+13};} \item[J1008b] {\citet{Riquelme+12};} \item[CepX4] {\citet{Wilson99};} \item[J18179a] {\citet{Halpern12};} \item[J18179b] {\citet{Nowak12};} \item[J1409a] {\citet{Kennea+10};} \item[J1409b] {\citet{Orlandini12};} \item[J1858] {\citet{Remillard98};} \item[4U1626] {\citet{Chakrabarty98};} \item[Her X-1] {\citet{Staubert+09};}
\item[Cen X-3] {\citet{Raichur+Paul10, Falanga+15};} \item[4U1538] {\citet{Falanga+15,Clark00};} \item[Vela X-1] {\citet{Bildsten1997, Kreykenbohm+08, Falanga+15};} \item[OAO1657a] {\citet{Jenke+12,Falanga+15};} \item[OAO1657b] {\citet{Audley06};} \item[GX301] {\citet{Sato+86, Koh+97, Doroshenko+09};} \item[GX1+4] {\citet{Hinkle+06}.} \end{tablenotes} \end{deluxetable} \end{longrotatetable} \section{Individual Accreting X-Ray Pulsars observed by GBM}\label{sec:individuals} There are 39 sources in total, 31 transient systems and 8 persistent systems, with frequency and pulsed flux histories available on the GAPP public website\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars.html\#}}. For each source, we link the corresponding GAPP web page for the reader's convenience. The main properties of each source are listed in Table~\ref{tab:summary}, along with their distance values as measured either by the Gaia mission \citep{Bailer-Jones18} following the method described in Appendix~\ref{sec:gaiadist} or as otherwise specified. \subsection{Transient Outbursts in BeXRB systems} Most GBM detected XRPs are BeXRB systems. Among the transient systems, pulsations from 28 BeXRBs, 1 possible BeXRB, 1 HMXB (with no better subclassification), and 1 LMXB are observed with GBM. Below, we describe the main timing properties of each transient XRP detected by GBM. \subsubsection{GRO J1744--28} GRO J1744--28 is the fastest accreting X-ray pulsar in the sample, with a spin period of only $\sim0.467\,$s, discovered with BATSE \citep{Finger+96}. This source is also known as the \emph{Bursting Pulsar} because it shows Type II-like bursting activity; such bursts are usually attributed to thermonuclear burning, but in GRO J1744--28 they are possibly due to accretion processes \citep[and references therein]{Court18}. It is the only LMXB among the transient systems detected by GBM.
It has an orbital period of about $12\,$days, and its distance was calculated as $\sim8.5\,$kpc by \citet{Kouveliotou96} and \citet{Nishiuchi99}, but this value is challenged by the $\sim4\,$kpc obtained from studies of its near-infrared counterpart, a reddened K2 III giant star \citep{Gosling17, Wang07,Masetti14}. On the other hand, the closest Gaia counterpart is located at $14.0\arcsec$ from the nominal source position and at a distance of $1.3^{+1.2}_{-0.5}\,$kpc. The activity observed from GRO J1744--28 is limited to three episodes: the Type II outburst that led to its discovery in 1995 \citep{Kouveliotou96}, the outburst that occurred in 1997 \citep{Nishiuchi99}, and the last one in 2014, which followed almost two decades of quiescence (\citealt{Dai15,Sanna17}, and references therein). The spin-up rate observed during the outbursts is of the order of $10^{-12}\,$Hz\,s$^{-1}$, while the secular spin-up trend shows an average rate of about $2\times10^{-13}\,$Hz\,s$^{-1}$ \citep{Sanna17}. GBM also measured an average spin derivative of $\sim3\times10^{-12}\,$Hz\,s$^{-1}$ during the 2014 outburst\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/groj1744.html}.}. Comparisons with archival BATSE data show a marginal long-term spin-up trend with an average rate of $\dot{\nu}\sim1\times10^{-14}\,$Hz\,s$^{-1}$. \subsubsection{SAX J2103.5+4545} SAX J2103.5+4545 was discovered by BeppoSAX as a transient accreting pulsar with a spin period of $\sim360\,$s \citep{Hulleman98}. With an orbital period of $\sim$13~days, it has one of the shortest orbits known for a BeXRB~\citep{Baykal07}. The Gaia distance for this source is $6.4^{+0.9}_{-0.7}\,$kpc, consistent with the distance value obtained from optical observations of the B0 Ve companion star ($6.5\,$kpc; \citealt{Reig04,Reig10}).
Although SAX J2103.5+4545 has been classified as a BeXRB \citep{Reig04}, it does not follow the Corbet $P_{\rm orb}$--$P_{\rm spin}$ correlation, being located instead in the region of wind accretors (see Fig.~\ref{fig:corbet}). Since its discovery, numerous Type I and Type II outbursts have been observed \citep{Camero07}. Since then, SAX J2103.5+4545 has shown a general spin-up trend at different rates\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/saxj2103.html}.}, with an average value of $\dot{\nu}\approx10^{-12}\,$Hz\,s$^{-1}$ \citep{Camero07}. Those authors also observe a spin-up rate correlation steeper than the expected power law with 6/7 index reported in Equations~(\ref{eq:torque2}) and (\ref{eq:parmar}) (see Fig.~13 in their work). During outburst episodes, the measured spin-up rate is $\dot{\nu}=2.6\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Ducci08}. However, long ($\sim$yr) spin-down periods have also been observed between outbursts, with $\dot{\nu}=-4.2\times10^{-14}\,$Hz\,s$^{-1}$ \citep{Ducci08}. \subsubsection{4U 1901+03} 4U 1901+03 was first detected in X-rays by the Uhuru mission in 1970--1971 \citep{Forman76}. Afterwards, the source remained undetected until 2003, when it underwent a Type II outburst that lasted for about 5 months and during which pulsations were detected at a spin period of about $3\,$s \citep{Galloway+05}. The orbital period is $\sim23\,$days \citep{Galloway+05,Jenke+Finger11}. The optical companion stellar type was uncertain until recent measurements by \citet{McCollum19}, who proposed a B8/9 IV star, consistent with the X-ray timing analysis that favors a BeXRB nature \citep{Galloway+05}. The Gaia measured distance is $2.2^{+2.2}_{-1.3}\,$kpc, much closer than the initially proposed distance of ${\sim10}\,$kpc \citep{Galloway+05}.
However, optical spectroscopy of the companion, together with the separation between the \textit{Gaia} measurement and the Chandra derived position for this source \citep{Halpern19}, led \citet{Strader19} to favor a distance $>12\,$kpc for this system. After the Type II outburst in 2003, the source remained mostly quiescent, showing moderate activity in 2011 December \citep{Jenke+Finger11,Sootome11}, when a weak flux increase was observed, accompanied by a spin-up trend. The spin-up observed during the Type II outburst in 2003 was $2.9\times10^{-11}\,$Hz\,s$^{-1}$ \citep{Galloway+05}. More recently, however, the source underwent another Type II outburst \citep{Kennea19,Nakajima19}. The GBM spin-up average rate\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/4u1901.html}.} measured during the 2019 outburst episode was $1.4\times10^{-11}\,$Hz\,s$^{-1}$, similar to that of the previous Type II outburst. GBM also observed the source slowly spinning down between outbursts at an average rate of $4.2\times10^{-13}\,$Hz\,s$^{-1}$. \subsubsection{RX J0520.5--6932} RX J0520.5--6932 was discovered with ROSAT \citep{Schmidtke94}. Pulsations were detected only two decades later, when a Swift/XRT survey of the LMC in 2013 revealed RX J0520.5--6932 to have undergone an X-ray outburst, and XMM-Newton observations found a spin period of about $8\,$s \citep{Vasilopoulos14}. The orbital period is $\sim24\,$days \citep{Coe01,Kuehnel14}. The optical counterpart is an O9 Ve star \citep{Coe01}, and the source is located in the LMC ($\sim50\,$kpc). The outburst observed in 2013 was the first and only one since its discovery \citep{Vasilopoulos13b}. During that episode, a strong spin-up trend was observed by GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/rxj0520.html}.}, at a rate of $\dot{\nu}=3.5\times10^{-11}\,$Hz\,s$^{-1}$.
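Average spin-up rates like the one above are obtained from pulse-frequency histories. A minimal illustrative Python sketch (this is not the GBM timing pipeline, and the frequency values are synthetic, chosen to mimic an RX J0520.5--6932-like spin frequency of $\sim0.124\,$Hz with an injected $3.5\times10^{-11}\,$Hz\,s$^{-1}$ trend):

```python
# Hedged sketch: estimate an average frequency derivative nudot from a
# pulse-frequency history by a linear least-squares fit,
# nu(t) ~ nu0 + nudot * t.
import numpy as np

def average_nudot(mjd, nu_hz):
    """Slope of a linear fit to nu(t); returns nudot in Hz/s."""
    t_s = (np.asarray(mjd) - mjd[0]) * 86400.0   # days -> seconds
    slope, _intercept = np.polyfit(t_s, nu_hz, 1)
    return slope

# Synthetic epochs and frequencies (NOT real GBM measurements):
mjd = np.arange(56660.0, 56700.0, 5.0)
nu = 0.1244 + 3.5e-11 * (mjd - mjd[0]) * 86400.0  # injected spin-up trend
nudot = average_nudot(mjd, nu)                    # recovers ~3.5e-11 Hz/s
```

Real histories also require removing the orbital Doppler modulation before fitting, which is why the orbital elements of Table~\ref{tab:summary} enter the analysis.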
\subsubsection{A 1118--616} X-ray pulsations with a period of $406.5\,$s were discovered in A 1118--616 by Ariel 5. Initially interpreted as the binary period \citep{Ives75}, this periodicity was later identified as the pulsar spin period \citep{Fabian75,Fabian76}. The first determination of the orbital period, $24\,$days, was obtained later by \citet{Staubert11}. The optical companion is an O9.5 IV-Ve star, Hen 3-640/Wray 793 \citep{Chevalier75}, and the Gaia measured distance for this system is $2.9^{+0.3}_{-0.2}\,$kpc (although other works locate it at about $5.2\,$kpc; \citealt{Janot81,Riquelme+12}). Its outburst activity is sporadic, with only three major outbursts since its discovery (see \citealt{Suchy11}, and references therein). The average spin-up rate observed during accretion is of the order of $(2-4)\times10^{-13}\,$Hz\,s$^{-1}$ (see, e.g., \citealt{Coe1A94}, and the relevant GAPP web page\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/a1118.html}.}), while the secular trend between outbursts is a spin-down at a rate of about $-9.1\times10^{-14}\,$Hz\,s$^{-1}$ \citep{Mangano09, Doroshenko10A}. After the 2011 outburst, the source entered a quiescent period that is still ongoing at the time of writing, remaining undetected with GBM. \subsubsection{4U 0115+634} Pulsations at $\sim4\,$s from 4U 0115+634 were discovered by SAS-3 in 1978 \citep{Cominsky78}. The orbital period is $24\,$days \citep{Rappaport78}, and the optical companion is V635 Cas, a B0.2 Ve star. The system has a \textit{Gaia} measured distance of $7.2^{+1.5}_{-1.1}\,$kpc, consistent with the approximate value of $\sim7\,$kpc inferred by \citet{Negueruela01a} and \citet{Riquelme+12}. 4U 0115+634 shows frequent outburst activity, with Type II outbursts observed as often as Type I outbursts, at a quasi-periodicity of $3-5\,$years \citep{Negueruela01a,Negueruela01b}. The general spin period evolution shows spin-down during quiescence, as well as between outbursts.
However, rapid spin-up episodes are observed during Type II activity, $\dot{\nu}\sim2.3\times10^{-11}\,$Hz\,s$^{-1}$ \citep{Li12}, which resulted in a secular spin-up trend \citep{Boldin13}. More recently, the secular trend has inverted, and the source has started to show long-term spin-down as observed by GBM\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/4u0115.html}.}. \subsubsection{Swift J0513.4--6547} Swift J0513.4--6547 was discovered by Swift during an outburst and identified as a pulsar with a spin period of $28\,$s in the same observation \citep{Krimm09}. The outburst lasted for about 2 months, after which the source entered quiescence, interrupted only by a moderate re-brightening in 2014, when it showed a luminosity of the order of $10^{36}\,$erg\,s$^{-1}$ \citep{Sturm14,Sahiner16}. The system is located in the LMC, and the optical companion is a B1 Ve star \citep{Coe15}. The peak spin-up rate observed by GBM\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/swiftj0513.html}.} during the 2009 outburst is about $3\times10^{-10}\,$Hz\,s$^{-1}$ \citep{Finger+Beklen09, Coe15}, while during the quiescent period between 2009 and 2014, the source was spinning down at an average rate of $-1.5\times10^{-12}\,$Hz\,s$^{-1}$~\citep{Sahiner16}. \subsubsection{Swift J0243.6+6124} Swift J0243.6+6124 is the newest discovered source in the present catalog and among the brightest. It was first discovered by Swift and then independently identified as a pulsar by Swift and GBM, with a spin period of about $10\,$s \citep{Jenke2017, Kennea17}. The orbital period is $\sim27\,$days \citep{Jenke18}, and the optical counterpart is a late Oe-type or early Be-type star \citep{Bikmaev17}, with a Gaia measured distance of $6.9^{+1.6}_{-1.2}\,$kpc.
Following its discovery, the source entered a Type II outburst that lasted for $\sim150\,$days, becoming the first known Galactic ultraluminous X-ray (ULX) pulsar, with a peak luminosity of about $2\times10^{39}\,$erg\,s$^{-1}$ \citep{Wilson18}. During the outburst episode, the source showed dramatic spin-up at a maximal rate of $\sim2\times10^{-10}\,$Hz\,s$^{-1}$ \citep{Doroshenko17}. After the Type II outburst, the source kept showing weaker X-ray activity at a few of the following periastron passages\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/swiftj0243.html}.}. During these later passages, the source instead spun down at a rate about 100 times smaller in magnitude ($\sim-2\times10^{-12}\,$Hz\,s$^{-1}$), even at an accretion luminosity of a few $10^{36}\,$erg\,s$^{-1}$ \citep{Doroshenko19,Jaisawal19}. Only during the last exhibited outburst did the source show a spin-up trend again, at a rate comparable to the previously observed spin-up phase. Currently, the source remains quiescent. \subsubsection{GRO J1750--27} Pulsations at $\sim4\,$s from GRO J1750--27 were observed by BATSE during the same outburst that led to its discovery \citep{Wilson95,Scott97}. The orbital period is about $30\,$days, and the system is located at a distance of $\sim18\,$kpc \citep{Scott97, Lutovinov19}, with no Gaia DR2 counterpart (but with a DR1 solution of $1.4^{+1.9}_{-0.5}\,$kpc). No optical counterpart has been identified yet due to the location of the system beyond the Galactic center. However, following the classification of \citet{Corbet1986}, \citet{Scott97} identified GRO J1750--27 as a BeXRB. GRO J1750--27 shows only sporadic outburst activity, with only three outbursts detected since its discovery and only one observed by GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/groj1750.html}.}.
Local spin-up trends during these outbursts have been observed at a rate of about $1.5\times10^{-11}\,$Hz\,s$^{-1}$ \citep{Shaw09, Lutovinov19}. The source does not show any appreciable spin derivative during quiescent periods. \subsubsection{Swift J005139.2--721704} Pulsations at about $4.8\,$s were first discovered from the SMC source XTE J0052--723 with RXTE \citep{Corbet01}. This source has recently been identified as coincident with Swift J005139.2--721704 in the SMC \citep{Strohmayer+2018} and is listed on the GAPP website with this name\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/swiftj005139.html}.}. \citet{Laycock2003} identified the source as a BeXRB, inferring its orbital period as $\sim20-40\,$days based on its pulsation period and its possible location on the Corbet diagram. Pulsations from this source were observed with GBM only once, during its recent outburst in 2018. This represented only the second outburst ever observed from this source \citep[and references therein]{Monageng19}. These authors reported the source to show unusual spin-down trends during accretion, which may be due to orbital modulation. \subsubsection{2S 1553--542} Pulsations at $\sim9\,$s from 2S 1553--542 were discovered by SAS-3 \citep{Kelley82}. The orbital period is about $31\,$days \citep{Kelley83}, and the optical companion has been identified as a B1-2V type star \citep{Lutovinov16}. The closest counterpart measured by Gaia is located at an angular offset of $5\arcsec.5$ from the nominal source position, at a distance of $3.5^{+2.6}_{-1.5}\,$kpc. However, a distance of $20\pm4\,$kpc has been reported by \citet{Tsygankov16} based on the assumption of accretion-driven spin-up. Since its discovery, the source has exhibited three outbursts, all of which were Type II (see \citealt{Tsygankov16}, and references therein). This behavior is interpreted in terms of the low eccentricity ($e\sim0.035$) of the binary orbit \citep{Okazaki01}.
Local spin-up rates during accretion episodes were measured as $2.9\times10^{-11}\,$Hz\,s$^{-1}$ for the 2008 outburst \citep{Pahari12}, and $8.7\times10^{-12}\,$Hz\,s$^{-1}$ for the 2015 outburst\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/2s1553.html}.}. The spin-down rate measured between these outbursts is about $-4.0\times10^{-13}\,$Hz\,s$^{-1}$ \citep{Tsygankov16}. \subsubsection{V 0332+53} Pulsations at $\sim4\,$s from V 0332+53 were detected by the EXOSAT satellite \citep{Stella85}. The same observations revealed a moderately eccentric orbit ($e\sim0.3$) and an orbital period of about $34\,$days. The optical companion is an O8-9 Ve star, BQ Cam \citep{Honeycutt85,Negueruela99}, and the system distance was first estimated to be $2.2-5.8\,$kpc \citep{Corbet86}. This was later increased to $6-9\,$kpc \citep{Negueruela99}. Both findings are consistent with a Gaia measured distance of $5.1^{+1.1}_{-0.8}\,$kpc. Since its discovery, the source has shown four Type II outbursts, each one lasting for a few orbital periods and reaching peak luminosities of $\sim10^{38}\,$erg\,s$^{-1}$ (see \citealt{Doroshenko+16}, and references therein). The spin-up rate measured during outburst episodes is $(2-3)\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Raichur10}. However, as outburst activity from this source is relatively rare, the net secular spin derivative trend shows a slow spin-down\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/v0332.html}.}, $\sim-5\times10^{-14}\,$Hz\,s$^{-1}$. \subsubsection{XTE J1859+083} Pulsations from XTE J1859+083 at ${\sim}10\,$s were discovered with RXTE \citep{Marshall99}. An orbital period of 60.6 days was first proposed by \citet{Corbet09}, based on the separation of a few outbursts. However, analysis of a series of outbursts in 2015 led to a refined orbital solution with an orbital period of 37.9 days \citep{Kuehnel+16}. 
No optical companion has been identified yet, but the source is considered a BeXRB due to its position on the Corbet diagram. The closest counterpart measured by Gaia is located at an angular offset of $17\arcsec.3$, at a distance of $2.7^{+2.4}_{-1.5}\,$kpc. In 2015, the source showed a new bright outburst \citep[and references therein]{Finger15}, during which GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/xtej1859.html}.} measured a strong spin-up rate of $\dot{\nu}\sim1.6\times10^{-11}\,$Hz\,s$^{-1}$, similar to the rate observed in 1999 \citep{Corbet09}. \subsubsection{KS 1947+300} KS 1947+300 was first discovered with Mir-Kvant/TTM \citep{Borozdin90} and subsequently re-discovered with BATSE as the pulsating source GRO J1948+32, with a spin period of $\sim19\,$s \citep{Chakrabarty95}. The two were later identified as the same source, KS 1947+300 \citep{Swank00}. The orbital period is $42\,$days, while the binary orbit is almost circular, $e\sim0.03$ \citep{Galloway+04}. The Gaia measured distance is $15.2^{+3.7}_{-2.8}\,$kpc, approximately consistent with the distance of $\sim10\,$kpc measured by \citet{Negueruela03} and that of $10.4\pm0.9\,$kpc measured by \citet{Riquelme+12}, who also derived the stellar type (B0V) of the optical companion. KS 1947+300 is the only known BeXRB with an almost circular orbit that shows both Type I and II outbursts. During these outbursts, the source shows a spin-up trend, with a rate measured for the 2013 Type II outburst of $(2-4)\times10^{-11}\,$Hz\,s$^{-1}$ \citep{Galloway+04,Ballhausen16,Epili16}, while the source is spinning down between outbursts at an average rate of $-8\times10^{-13}\,$Hz\,s$^{-1}$, as measured by GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/ks1947.html}.}. \subsubsection{2S 1417--624} Pulsations with a period of $17.6\,$s were discovered from 2S 1417--624 with SAS-3 observations in 1978 \citep{Apparao80,Kelley81}.
This source shows both Type I and Type II outbursts, as well as decade-long quiescent periods. The orbital period is $42\,$days \citep{Finger96}, with the optical counterpart identified as a B-type (most likely a Be-type) star located at a distance of $1.4-11.1\,$kpc \citep{Grindlay84}, while the measured Gaia distance\footnote{Recently, \citet{Ji+2019} adopted a different Gaia counterpart to the source, which has a distance of $9.9^{+3.1}_{-2.4}\,$kpc. This estimated distance is, however, inconsistent with the inferred distance of $\sim20\,$kpc calculated using accretion-driven torque models.} is $3.8^{+2.8}_{-1.8}\,$kpc. The secular slow spin-down trend observed during quiescence is overshadowed by the large spin-up induced during its Type II outbursts, observed to be $1.3\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Raichur10}\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/2s1417.html}}. Recently, 2S 1417--624 entered a new giant outburst episode at an orbital phase of $\sim0.30$, similar to the previous outburst in 2009 \citep{Gupta18,Nakajima18,Ji+2019}.
Conversely, the long-term (measured from 1998 to 2012) spin-down trend has an average rate that is about 50 times slower, $-1.4\times10^{-12}\,$Hz\,s$^{-1}$~\citep{Klus14}. \subsubsection{EXO 2030+375} \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{Orbital_jumps.pdf} \caption{Blue y-axis: the evolution of the orbital phase shift of Type I outbursts from EXO 2030+375 as measured with Swift/BAT (blue dots). Recently, the outbursts peak at $\sim0.1$ in orbital phase, a behavior that seems to recur with a periodicity of $\sim20\,$yr (see the text). Red y-axis: the evolution of the NS spin frequency (corrected for the orbital motion) as measured with GBM (red dashed line). The pulsar is now entering a new spin-up phase, after about $2000\,$days of spin-down, similar to what was observed $\sim20\,$yr ago.} \label{fig:phase_shifts} \end{figure} EXO 2030+375 is a transient source discovered with EXOSAT \citep{Parmar+89}. The NS spin period is $41.7\,$s, while the orbital period is $\sim46\,$days \citep{Wilson+08}. The orbit of the NS around the O9-B2 stellar companion is eccentric, $e=0.4190$. Since its discovery, the source has shown both Type I and II outbursts. Type I episodes have been occurring nearly every orbit for $\sim28\,$yr with a typical duration of about $7-14\,$days, while Type II outbursts can last as long as $80\,$days. EXO 2030+375 is the XRP with the largest number of observed Type I outbursts ($\sim150$), detected in the X-ray band by many space-based observatories, e.g., Tenma, Ginga, ASCA, BATSE, RXTE, and more recently Swift/BAT and GBM \citep{Laplace+17}. The long-term spin derivative trend observed by both BATSE and GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/exo2030.html}.} is spin-up at a mean rate of $\dot{\nu}\sim1.3\times10^{-13}\,$Hz\,s$^{-1}$. During such long-term spin-up periods, outbursts occur typically $5-6\,$days after periastron passage.
However, the source has shown two torque reversals, one of which was ongoing in 2019. The first torque reversal occurred in 1995; it was preceded by a ${\sim}3\,$yr quiescent period and later accompanied by a shift of the outburst peak to $8-9$ days earlier than the preceding outbursts ($3-4\,$days before periastron; \citealt{Reig+Coe98,Wilson+02}). Recently, the source has shown another quiescent period ($\sim1\,$yr), after which the resumed activity was characterized by properties similar to those observed $\sim20\,$yr before: a shift in the outburst peak orbital phase and a spin-down trend. This behavior highlights a possible $21\,$yr cycle due to Kozai--Lidov oscillations in the Be disk \citep{Laplace+17}. According to \citet{Laplace+17}, the shift in the peak orbital phase is $\sim0.15$ over the past cycle. To verify their predictions, we calculated the orbital shift of the outburst peak using the Swift/BAT monitor. To achieve this, we modeled each Type I outburst observed by the BAT with a skewed Gaussian profile, whose fitted peak was taken as the corresponding outburst peak time. These are shown in Fig.~\ref{fig:phase_shifts} as a function of time from 2016 January (MJD 57400) up to 2019 October (MJD 58700). At the time of writing, GBM recorded the start of a new spin-up phase, similar to what was observed in the previous cycle \citep{Laplace+17}. This supports the hypothesis formulated by those authors about a $\sim20\,$yr periodicity in the X-ray behavior of EXO 2030+375. \subsubsection{MXB 0656--072} Despite the discovery of MXB 0656--072 more than $40\,$yr ago with SAS-3 \citep{Clark75}, it took almost $30\,$yr to detect any pulsations from this source, when RXTE observed it to have a spin period of $\sim160\,$s \citep{Morgan+03}. The orbital period is about $100\,$days \citep{Yan12}, and the optical companion is an O9.7 Ve star \citep{Pakull03, Nespoli12}.
The Gaia measured distance is $5.1^{+1.4}_{-1.0}\,$kpc, consistent with the distance derived from optical analysis of the companion spectrum \citep{McBride06}. So far, the source has shown only Type I outbursts, with a peak luminosity of $<10^{37}\,$erg\,s$^{-1}$. The source has also shown fast spin-up during accretion. A spin-up trend of $\dot{\nu}\sim5\times10^{-12}\,$Hz\,s$^{-1}$ (that is, about $0.45\,$s in $30\,$days) was observed in the 2003 outburst \citep{McBride06}. The last series of Type I outbursts observed from this source dates back to the period between $2007$ and $2009$ \citep{Yan12}; afterwards, the source entered a quiescent period that is still ongoing. The spin-up rate measured by GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/mxb0656.html}} during the last of those outbursts was comparable to that measured in 2003. \subsubsection{GS 0834--430} Pulsations from GS 0834--430, with a spin period of $12\,$s, were first observed with Ginga \citep{Aoki92}. The orbital period was measured to be $106\,$days by \citet{Wilson97}. This was determined by using the spacing between the first five of seven outbursts observed between 1991 and 1993, while the last two were spaced by about $140\,$days. The optical counterpart is a B0-2 III-Ve type star, estimated to be located at a distance of $3-5\,$kpc based on its luminosity class \citep{Israel00}. This was later found to be consistent with the measured Gaia distance of $5.5^{+2.5}_{-1.7}\,$kpc for the closest counterpart located at $5.4\arcsec$ from the nominal source position. The average spin-up rate during the first outbursting period was about $6\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Wilson97}, while the spin-up rate measured by GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/gs0834.html}} during the last outburst in 2012 was found to be $1.1\times10^{-11}\,$Hz\,s$^{-1}$ \citep{Jenke12}.
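Spin evolution rates are quoted throughout this section as frequency derivatives $\dot{\nu}$, whereas much of the older literature quotes period derivatives $\dot{P}$; since $\nu=1/P$, the two are related by $\dot{\nu}=-\dot{P}/P^{2}$. A minimal conversion helper (purely illustrative, not part of the GBM pipeline), using the MXB 0656--072 numbers quoted above:

```python
def nudot_to_pdot(nu_dot, period):
    """Convert a spin-frequency derivative (Hz/s) into a period
    derivative (s/s). Since nu = 1/P, d(nu)/dt = -Pdot / P**2."""
    return -nu_dot * period**2

# MXB 0656-072 during its 2003 outburst: P ~ 160 s, nu_dot ~ 5e-12 Hz/s
p_dot = nudot_to_pdot(5e-12, 160.0)
delta_p_30d = p_dot * 30 * 86400   # period change accumulated over 30 days
print(f"Pdot = {p_dot:.2e} s/s, dP over 30 d = {delta_p_30d:.2f} s")
```

For $P\sim160\,$s and $\dot{\nu}\sim5\times10^{-12}\,$Hz\,s$^{-1}$ this gives a period change of a few tenths of a second over $30\,$days, the same order as the $\sim0.45\,$s quoted above (the exact figure depends on the period and interval adopted).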
\subsubsection{GRO J2058+42} Pulsations from this source, with a spin period of $198\,$s, were discovered by BATSE during a giant X-ray outburst in 1996 \citep{Wilson96}. Subsequent observations found an orbital period of about $110\,$days \citep{Wilson+98}, and the source was later identified as a BeXRB system \citep{Wilson+05}. The distance to the source was first estimated at $7-16\,$kpc \citep{Wilson+98}, consistent with the Gaia distance of $8.0_{-1.0}^{+1.2}\,$kpc. During the giant outburst in 1996, the source showed spin-up at a rate of $1.7\times10^{-11}\,$Hz\,s$^{-1}$ \citep{Wilson+98}. GRO J2058+42 was observed by GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/groj2058.html}.} only during the recent bright X-ray outburst \citep{Malacaria19}, when the source showed a spin-up rate similar to that reported in 1996; it was previously unobserved by GBM, thus representing the most recent addition to the GBM Pulsar catalog. \subsubsection{A 0535+26}\label{subsec:a0535} Pulsations from A 0535+26 were discovered by Ariel 5 with a period of $103\,$s \citep{Coe+75,Rosenberg+75}. The system has an orbital period of $111\,$days \citep{Nagase+82}. The optical companion is HD 245770, an O9.7-B0 IIIe star located at a distance of $\sim2\,$kpc \citep{Hutchings+78,Li+79,Giangrande+80, Steele+98}. This distance was later confirmed by Gaia to be $2.1^{+0.3}_{-0.2}\,$kpc. The system regularly shows Type I outbursts, separated by both quiescent phases and Type II episodes (see \citealt{Motch+91, Mueller+13}, and references therein). Similar to GX~$1+4$ and GRO J1008--57 (see Sections~\ref{subsubsec:gx1+4} and \ref{subsec:groj10}, respectively), little to no spin-up is detected during Type I outbursts of this source. However, large spin-up trends have been observed during Type II outbursts.
The source is found to be spinning down during quiescence\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/a0535.html}}. The average spin-down rate is $1.4\times10^{-11}\,$Hz\,s$^{-1}$ (see, e.g., \citealt{Hill+07}), while the measured spin-up rate during the giant outburst episodes is $\sim(6-12)\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Camero+12,Sartore+15}. \subsubsection{IGR J19294+1816} IGR J19294+1816 was initially discovered with the International Gamma-Ray Astrophysics Laboratory (INTEGRAL; \citealt{Turler09}), and later recognized as a pulsating source by Swift \citep{Rodriguez09a,Rodriguez09b}, with a pulsation period of $\sim12\,$s. An orbital period of $117\,$days has been proposed, although this remains uncertain \citep{Corbet+Krimm09,Rodriguez09b,Bozzo11}. The Gaia measured distance is $2.9^{+2.5}_{-1.5}\,$kpc. However, independent measurements of the distance report inconsistent values. A lower limit has been estimated to be ${>}8\,$kpc for a B3 I optical counterpart by \citet{Rodriguez09a}, while a distance of $11\,$kpc was inferred by \citet{RR18} for a B1 Ve counterpart. Inspection of the GBM data\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/igrj19294.html}} reveals a secular spin-down trend with $\dot{\nu}\sim-2\times10^{-12}\,$Hz\,s$^{-1}$, interrupted by local spin-up episodes accompanying accretion during outbursts, with an average spin-up rate of about $\dot{\nu}\sim2.5\times10^{-11}\,$Hz\,s$^{-1}$. GBM observations support the $\sim117\,$day periodicity. According to the most recent observations by GBM and \textit{XMM} \citep{Domcek19}, the source still exhibits a long-term spin-down trend, with Type I outbursts at each periastron passage. \subsubsection{GX 304-1}\label{subsec:gx304} Pulsations with a period of $\sim272\,$s from GX 304-1 were discovered with SAS-3 in 1977 \citep{McClintock77}.
The orbital period is about $132\,$days, and the optical companion has been identified as a B2 Vne type star. The Gaia measured distance to the companion was found to be $2.01^{+0.15}_{-0.13}\,$kpc, in agreement with a previously measured distance of $2.5\,$kpc \citep{Mason78,Parkes+80}. The source typically shows both Type I and II outbursts, as well as long ($\sim$yr) quiescent periods. According to GBM observations\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/gx304m1.html}}, the source shows accretion-driven spin-up episodes at a rate of about $\dot{\nu}\sim1\times10^{-12}\,$Hz\,s$^{-1}$ during active periods and long-term spin-up trends at an average rate of about $\dot{\nu}\sim1.3\times10^{-13}\,$Hz\,s$^{-1}$. A spin-down rate between outbursts of $\dot{\nu}\sim-5\times10^{-14}\,$Hz\,s$^{-1}$ \citep{Malacaria+15,Sugizaki+15} has also been observed in the data. Recently, the source has entered a new period of quiescence, probably due to major disruptions of the Be disk following a Type II outburst, showing only sporadic X-ray activity \citep{Malacaria+17}. \subsubsection{RX J0440.9+4431} Pulsations with a period of $202\,$s from RX J0440.9+4431 were discovered with RXTE \citep{Reig99}. The binary orbital period is $150\,$days (\citealt{Ferrigno+13}, and references therein), and the optical companion is a B0.2 Ve star with a Gaia measured distance of $3.2^{+0.7}_{-0.5}\,$kpc, consistent with previous measurements of $\sim3.3\,$kpc \citep{Reig05}. Only Type I outbursts have been observed from this source, the first and brightest of which was detected in 2010 \citep{Usui12}, exhibiting a spin-up rate of about $\dot{\nu}\sim4.5\times10^{-12}\,$Hz\,s$^{-1}$ in GBM data\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/rxj0440.html}.}.
A strong, long-term spin-down trend has also been observed between the first pulsations discovered from the source in 1999 ($\approx202\,$s) and the outburst analyzed $12\,$yr later. Pulsation periods from the latter were measured at $\approx206\,$s, resulting in a spin-down rate of about $\dot{\nu}=-3\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Ferrigno+13}. \subsubsection{XTE J1946+274} Pulsations with a period of $\sim15\,$s were first detected from XTE J1946+274 by RXTE \citep{Smith98}. The orbital period is about $169\,$days \citep{Wilson03}, and the optical companion is a B0-1 IV-Ve star with a measured Gaia distance of $12.6^{+3.9}_{-2.9}\,$kpc, consistent with previous measurements of $8-10\,$kpc (\citealt{Verrecchia02,Wilson03,Riquelme+12}). A number of Type I and II outbursts have been observed from this source, as well as long quiescent periods\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/xtej1946.html}}. During accretion, the source shows strong spin-up at an average rate of $\dot{\nu}\sim(5-10)\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Wilson03,Doroshenko17}. During quiescent periods, the source follows a long-term spin-down trend at a rate of about $\dot{\nu}=-2.3\times10^{-13}\,$Hz\,s$^{-1}$. Recently, the source has shown another bright outburst episode, monitored with GBM and NICER \citep{Jenke18_J1946}. Analysis of these data is ongoing (Mailyan B. et al. 2020, in preparation). \begin{figure*}[!t] \includegraphics[width=1.\textwidth]{2S1845_torque2.png} \caption{The spin history of the BeXRB 2S 1845--024 as observed by BATSE (blue) and GBM (green). Errors are smaller than the data points. Two separate linear fits are shown as straight lines for the BATSE (blue) and GBM (green) data. The estimated torque reversal time, MJD $53053\pm250$, is inferred from the intersection of the linear fit lines (see Sect.~\ref{subsec:2s1845}).
The gray shaded area marks the period where neither BATSE nor GBM data are available.} \label{fig:2S1845_torque} \end{figure*} \subsubsection{2S 1845--024}\label{subsec:2s1845} Pulsations from 2S 1845--024 (GS 1843--024) were discovered with Ginga with a spin period of $\sim30\,$s \citep{Makino88}. The orbital period is about $242\,$days \citep{Zhang96,Finger+99}. No optical counterpart is currently known for this system. However, based on the Corbet diagram and the regularity of the observed outbursts, this source has been identified as a BeXRB. No Gaia measurement of the distance is available for this source in the DR2, but an inferred distance of $\sim10\,$kpc has been obtained from the analysis of the X-ray spectral properties of the source \citep{Koyama90}. No Type II outbursts have been observed from this source. A secular spin-up trend was measured with BATSE during the first $\sim5.5\,$yr after its discovery, resulting from fast local spin-up episodes during outbursts at a rate of $\dot{\nu}\sim4\times10^{-12}\,$Hz\,s$^{-1}$, which yielded a long-term spin-up trend at a rate of $\dot{\nu}\sim2.7\times10^{-13}\,$Hz\,s$^{-1}$ \citep{Finger+99}. More recently, the source has inverted its long-term trend and has now been in a spin-down phase for around $10\,$yr.\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/2s1845.html}} The strength of the local spin-up episodes associated with outbursts ($\dot{\nu}\sim3.5\times10^{-12}\,$Hz\,s$^{-1}$), as well as that of the long-term spin-down trends ($\dot{\nu}\sim-2.4\times10^{-13}\,$Hz\,s$^{-1}$), is similar to the strength of those preceding the torque reversal. A comprehensive spin history for this source is shown in Fig.~\ref{fig:2S1845_torque}.
An estimate of the torque reversal time can be obtained by assuming that the long-term linear trends, seen separately in BATSE data up to MJD 51560 and in GBM data after MJD 56154, can be extrapolated to the period where neither BATSE nor GBM data were available. This returns a torque reversal time of MJD $53053\pm250$, where the uncertainty is derived by extrapolating the two separate linear fits within the uncertainties of their parameters. \subsubsection{GRO J1008--57}\label{subsec:groj10} Pulsations from GRO~J1008--57 were discovered by CGRO during an X-ray outburst in 1993 \citep{Stollberg1993}. The NS has a spin period of about $93.5\,$s, while the binary orbital period is $\sim248\,$days \citep{Levine+06, Coe+07,Kuehnel+13}. The optical counterpart is either a giant (luminosity class III) or a main-sequence (V) O9e-B1e type star \citep{Coe+94}. There is no available Gaia distance for this source, but \citet{Riquelme+12} estimate the system to be at a distance of either $9.7$ or $5.8\,$kpc, depending on the luminosity class of the companion star. As with GX~$1+4$ (see Sect.~\ref{subsubsec:gx1+4}) and A0535+26 (see Sect.~\ref{subsec:a0535}), the source exhibits a secular spin-down trend interrupted by brief spin-up episodes correlated with bright flux levels, typical of Type II outbursts\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/groj1008.html}}. The spin-up rate observed during the 2012 giant outburst is $6\times10^{-12}\,$Hz\,s$^{-1}$, while the secular spin-down rate is about $-2.3\times10^{-14}\,$Hz\,s$^{-1}$, induced by the propeller accretion mechanism in that regime \citep{Kuehnel+13}. The source typically undergoes an outburst at each periastron passage, with recent activity characterized by peculiar outburst light curves with $2-3$ peaks and a peak luminosity of several times $10^{37}\,$erg\,s$^{-1}$ \citep{Nakajima14, Kuhnel17}.
Applying the orbital solution found for this source by \citet{Kuehnel+13} still shows orbital signatures in GBM data, and we therefore do not consider its pulse frequency history as demodulated. \subsubsection{Cep X-4} Pulsations from Cep X-4 at $\sim66\,$s were first detected with Ginga \citep{Makino+88}. Only a handful of outbursts with a relatively low luminosity have been observed from this source; thus, the orbital elements for this binary system are still unknown. However, a possible orbital period of about $21\,$days has been suggested by \citet{Wilson99} and \citet{McBride07}. The optical counterpart has been identified as a possible B1-2 Ve star \citep{Bonnet98}. The same authors have tentatively estimated a distance to the source of about $4\,$kpc, but this value has been challenged by \citet{Riquelme+12} who proposed a distance of either $7.9$ or $5.9\,$kpc according to whether the stellar type of the companion is a B1 or B2 star, respectively. The distance of $4\,$kpc also does not agree with the measured Gaia distance of $10.2^{+2.2}_{-1.6}\,$kpc. Between 1993 and 1997, \citet{Wilson99} used BATSE data to measure an average spin-down rate of $\dot{\nu}\sim-4\times10^{-14}\,$Hz\,s$^{-1}$. Spin-up has also been observed during accretion episodes at a rate of $\sim10^{-12}\,$Hz\,s$^{-1}$. At the time of writing, the source is still showing a general spin-down trend\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/cepx4.html}}. \subsubsection{IGR J18179--1621} Pulsations from IGR J18179--1621 were discovered by Swift with a spin period of about $12\,$s \citep{Halpern12}. This was later confirmed by Fermi/GBM \citep{Finger12} detections of its pulsations\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/igrj18179.html}}. 
The only activity reported from this source is that which led to its discovery in 2012, when the source slightly brightened and became detectable by INTEGRAL and other X-ray satellites (see \citealt{Bozzo12}, and references therein). The nature of the optical companion is uncertain, but the analysis of the spectral characteristics of the source along with the presence of pulsations suggests that it belongs to the class of HMXBs/BeXRBs \citep{Nowak12,Tuerler12}. There is no measured Gaia distance for this source in the DR2, but a value of $8.0^{+2.0}_{-7.0}\,$kpc was found by \citet{Nowak12}. \subsubsection{MAXI J1409--619} Pulsations from MAXI J1409--619 were discovered by Swift, with an NS spin period of about $500\,$s \citep{Kennea+10}, and later confirmed by GBM \citep{CameroAtel10}, which also detected a spin-up during a follow-up observation, as the source re-brightened a few weeks after its discovery. In the GBM data\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/maxij1409.html}}, the frequency increased at a rate of $1.6\times10^{-11}\,$Hz\,s$^{-1}$ during the outburst observed in 2010 December. Only a handful of observations have been carried out for this source immediately following its discovery. Consequently, very little is known about it. Given the shape of its light curve, the characteristics of its X-ray spectrum, its location close to the Galactic plane, and its vicinity to an infrared counterpart (2MASS 14080271--6159020), MAXI J1409--619 has been suggested to be an SFXT candidate \citep{Kennea+10}. There is no Gaia counterpart consistent with the X-ray source position as measured by Swift/XRT, but the distance to this source has been measured by \citet{Orlandini12} as $14.5\,$kpc. Since the last GBM observation in 2010 December, the source has remained in a state of quiescence.
\subsubsection{XTE J1858+034} Pulsations at a spin period of about $221\,$s from XTE J1858+034 were discovered by RXTE \citep{Remillard98, Takeshima98}. The source was discovered during one of only a few recorded outburst episodes (see also \citealt{Molkov04}), the last one being recorded by GBM in $2010$\footnote{\url{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/xtej1858.html}} \citep{Krimm10J1858}. The orbital period of the binary is currently unknown, and the spectral type of the companion is still uncertain, but there are indications that it is a Be-type star \citep{Reig04atel,Reig05}. The closest counterpart measured by Gaia is located at an angular offset of $3.5\arcsec$ from the nominal source position, at a distance of $1.55^{+0.28}_{-0.21}\,$kpc. There are no other available counterparts for this source. The spin-up measured by GBM (uncorrected for binary modulation) during the 2010 accretion episode is about $1\times10^{-11}\,$Hz\,s$^{-1}$. \subsection{Persistent binary systems} \subsubsection{4U 1626--67} 4U~1626--67 is an LMXB discovered by Uhuru in 1972 \citep{Giacconi+72}. The NS spins with a period of $7.66\,$s while orbiting its companion star, \emph{KZ TrA}, in only $42\,$minutes \citep{Middleditch+81,Chakrabarty98}. The optical companion is a very low-mass star (${<}0.1\,M_{\odot}$; \citealt{McClintock+77,McClintock+80}). The Gaia measured distance to the star is $3.5^{+2.3}_{-1.3}\,$kpc, consistent with more recent measurements of $3.5^{+0.2}_{-0.3}\,$kpc by \citet{Schulz19}. Since its discovery, the source has shown two major torque reversal episodes. The first was estimated to have happened in 1990 \citep{Wilson+93, Bildsten+94}, when the source switched from a steady spin-up trend, with a rate of $\dot{\nu}=8.5\times10^{-13}$\,Hz\,s$^{-1}$ that was observed for over a decade, to a $\sim7\,$yr long steady spin-down trend, with a rate of $\dot{\nu}=-3.5\times10^{-13}$\,Hz\,s$^{-1}$ \citep{Chakrabarty+97}.
The second torque reversal episode was observed by GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/4u1626.html}} and Swift/BAT in 2008, when the source started a new spin-up trend. As of 2019 November, the source is still spinning up, with a mean rate of $\dot{\nu}=4\times10^{-13}$\,Hz\,s$^{-1}$ \citep{Camero+10}. \subsubsection{Her X-1} Her X-1 was discovered by Uhuru in 1971 \citep{Tananbaum1972}. It is an eclipsing LMXB and one of the most studied accreting X-ray pulsars. The NS spin period is about $1.2\,$s, while the orbital period is $\sim1.7\,$days. The optical companion is an A7-type star \citep{Reynolds+97}, with a Gaia measured distance of $5.0^{+0.8}_{-0.6}\,$kpc, consistent with previous measurements of $6.1^{+0.9}_{-0.4}\,$kpc \citep{Leahy+14}. The system exhibits super-orbital X-ray modulation with a period of $\sim35\,$days, most likely driven by a precessing warped accretion disk (see \citealt{Scott+00,Leahy+Igna10,Kotze+12} and references therein) or by a precessing NS \citep{Postnov+13}. The general trend of the pulse period evolution is that of spin-up at an average rate of $\dot{\nu}=5\times10^{-13}\,$Hz\,s$^{-1}$ (see, e.g., \citealt{Klochkov09}). It occasionally exhibits spin-down episodes of moderately larger magnitude, $\dot{\nu}=-7\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Bildsten1997}. Recently, GBM measurements\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/herx1.html}} have shown that the spin derivative trend has flattened around a spin frequency of $807.937\,$mHz. \subsubsection{Cen X-3} The bright, persistent source Cen X-3 was discovered by the \emph{Uhuru} satellite in 1971, marking the first observation of an accreting X-ray pulsar \citep{Giacconi1971}.
The NS spin period is about $4.8\,$s, and the orbital period is $\sim2.1\,$days \citep{Kelley+83,Falanga+15}, with the NS in an almost circular orbit ($e<0.0016$) around V779~Cen, an O6-7 supergiant companion star \citep{Krzeminski74,Hutchings+79, Ash99}. The Gaia measured distance is $6.4^{+1.4}_{-1.1}\,$kpc, consistent with previous measurements of $5.7\pm1.5\,$kpc \citep{Thompson09}. Optical observations first revealed the presence of an accretion disk around the pulsar~\citep{Tjemkes+86}. X-ray monitoring of the source later found that the secular trend of the NS frequency is spin-up, although long spin-down periods have also been observed \citep{Nagase89CenX3}. GBM observations show that typical spin-up rates are of the order of $\dot{\nu}=3\times10^{-12}\,$Hz\,s$^{-1}$. Although the NS is accreting from a disk, there appears to be no correlation between the spin derivative and the observed X-ray flux in Cen~X-3 \citep{Tsunemi+96,Raichur+08a,Raichur+08b}. Instead, the high, aperiodic source variability is likely due to a radiatively warped accretion disk \citep{Iping+90} that does not reflect a real modulation of the accretion rate. The most likely scenario is that the mass transfer in this system is dominated by RLO with wind-accretion (and wind-captured disk) contributions \citep[and references therein]{Walter15,ElMellah19}. Orbital decay has also been observed for this source, at an average rate of $\dot{P}_{orb}/P_{orb}=-1.800(1)\times10^{-6}\,$yr$^{-1}$, and interpreted in terms of tidal interaction plus rapid mass transfer between the NS and its massive companion \citep{Nagase+92,Falanga+15}. The long-term spin derivative is likely due to an accretion disk rotating with alternating direction, with a proposed periodicity of about $9\,$yr in the Ginga data \citep{Tsunemi+96}.
However, the combined BATSE and GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/cenx3.html}} spin frequency and pulsed flux history for this source reveal a more complex behavior and show alternating long-term spin-down and spin-up trends with random-walk variations superimposed \citep{deKool+93}. \subsubsection{4U 1538--52} Pulsations from 4U 1538--52 were first discovered in 1976 by the Ariel 5 mission, which revealed a spin period of $530\,$s \citep{Davison+77}. The NS is in a $3.7\,$day, slightly eccentric ($e\sim0.2$) orbit \citep{Davison+77,Corbet+93,Clark00}. The supergiant companion is a B0e-type star, called QV Nor \citep{Parkes+78}. The Gaia measured distance is $6.6^{+2.2}_{-1.5}\,$kpc, consistent with a previous measurement of $5.8^{+2.0}_{-1.9}\,$kpc by \citet{Guver10}. Early observations of this source show a long-term spin-down trend with random short-term variations \citep{Makishima+87,Nagase89}, while later observations find a reversal of the general long-term trend, probably occurring in 1988. The average spin-up rate of $\dot{\nu}=1.8\times10^{-14}\,$Hz\,s$^{-1}$ is of the same order of magnitude as that of the previous spin-down period \citep{Rubin+97}. This spin-up phase lasted at least until 2006 \citep{Baykal+06}. As of 2019 November, GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/4u1538.html}} has been observing the source in a new spin-down trend, ongoing since the beginning of its operations in 2008. The binary source also shows hints of orbital decay, $\dot{P}_{orb}/P_{orb}=(0.4\pm1.8)\times10^{-6}\,$yr$^{-1}$ \citep{Baykal+06}, similar to the value observed in other X-ray binaries, yet consistent with a null value. \subsubsection{Vela X-1} X-ray pulsations from Vela X-1 were discovered by the SAS-3 satellite in 1975 \citep{McClintock+77}, revealing a spin period of about $283\,$s.
The X-ray source is eclipsing, with an orbital period of $\sim8.9\,$days (\citealt{Falanga+15}, and references therein). The orbit is almost circular ($e\sim0.09$) around HD~77581, a B0.5~Ia supergiant \citep{Brucato+Kristian72,Hiltner+72,vanKerkwijk+95}. The Gaia measured distance of $2.42^{+0.19}_{-0.17}\,$kpc is consistent with previous measurements of $2.0\pm0.2\,$kpc \citep[and references therein]{Gimenez16}. The ellipsoidal variation of the optical light curve of the stellar companion suggests the star is distorted due to the tidal forces from the NS \citep[and references therein]{Koenigsberger12}. The NS is believed to be wind-fed as the evolution of the spin frequency does not show any steady long-term trend, but is instead observed to take a random-walk path \citep{Deeter+89,deKool+93}. However, the geometrical configuration of the binary system is such that a more complex phenomenology needs to be taken into account. Both simulated and observational studies (see, e.g., \citealt{Blondin+91,Kaper+94,Sidoli14,Malacaria+16}) have shown that three different structures are present in the binary: a photoionization wake due to the Str{\"o}mgren sphere around the NS, a tidal stream due to the almost filled Roche lobe of the donor, and a turbulent accretion wake in which transient accretion disks with alternating directions of rotation form around the NS (see also \citealt{Fryxell+Taam88,Blondin+12}). Recent works also show that wind-captured transient disks can form in the ambient wind of Vela X-1 (see, e.g., \citealt{ElMellah19}). Those transient accretion disks are thought to be responsible for the sparse spin-up/spin-down episodes observed by GBM at random epochs\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/velax1.html}}. The spin-up rate observed during such episodes is of the order of $\dot{\nu}=(1-2)\times10^{-13}\,$Hz\,s$^{-1}$, with milder rates for spin-down episodes, $\dot{\nu}=-2\times10^{-14}\,$Hz\,s$^{-1}$.
With a magnetic field of $2.6\times10^{12}\,$G \citep[and references therein]{Fuerst+14} and an average luminosity of $5\times10^{36}\,$erg\,s$^{-1}$, and using $\Pi_{\rm su}=8$ from \citet{Shakura14a}, the quasi-spherical settling accretion model (Eq.~\ref{eq:QSAMup}) predicts a spin-up value of $\dot{\nu}=3\times10^{-14}\,$Hz\,s$^{-1}$, a factor of ${\sim}4$ weaker than observed. On the other hand, the spin-up rate expected by accretion-disk theory (see Eq.~\ref{eq:parmar}) is $\sim1.4\times10^{-12}\,$Hz\,s$^{-1}$, almost an order of magnitude faster than the observed one. Up to now, no clear correlation has been reported between the observed spin derivative episodes and the X-ray luminosity for this source. \subsubsection{OAO 1657--415} Pulsations from OAO 1657--415 were discovered by HEAO-1 with a spin period of about $38\,$s \citep{WhitePravdo1979}. The orbital period is $10.4\,$days, with an eclipse occurring for $1.7\,$days \citep{Chakrabarty+93}. Accretion onto the NS is fed by an optical companion that is currently believed to be in a transitional stage between an OB and a Wolf-Rayet star, of spectral type Ofpe/WN9 \citep{Mason2009}. The closest counterpart measured by Gaia is located at an angular offset of $4.7\arcsec$ from the nominal source position, at a distance of $2.2^{+0.7}_{-0.5}\,$kpc. However, previous measurements locate the system at a distance of about $4.4-12\,$kpc \citep{Chakrabarty02, Mason09}, consistent with the measurement of $7.1\pm1.3\,$kpc by \citet{Audley06}. The pulsar exhibits a secular spin-up trend at an average rate of $\dot{\nu}\sim8.5\times10^{-13}\,$Hz\,s$^{-1}$, superimposed with several spin-up/spin-down episodes throughout its long history of observation.
Analysis using BATSE and GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/oao1657.html}} data allowed \citet{Jenke2012} to establish two different modes of accretion: one resulting from transient, disk-driven accretion, which leads to steady spin-up periods correlated with flux (see also \citealt{Baykal97}); the other resulting from wind-driven accretion, in which the NS spins down at a (slower) rate that is uncorrelated with flux. Recently, a ``magnetic levitating disk'' scenario has been proposed to explain the spin evolution in OAO 1657--415 \citep{Kim+17}. \subsubsection{GX 301-2} GX 301-2 pulsations at $\sim680\,$s were discovered by Ariel 5 in 1976 \citep{White+76}. The NS is in an eccentric ($e\sim0.5$), $41.5\,$day orbit \citep{Sato+86} around Wray~977, a B1~Ia+ hypergiant star \citep{Vidal73,Parkes+80,Kaper+95}. The measured Gaia distance is $3.1^{+0.6}_{-0.5}\,$kpc, consistent with a previously estimated distance of $3.1\,$kpc \citep{Hammerschlag+79,Kaper+06}. GX 301-2 is a near-equilibrium rotator; thus, its net spin derivative is close to zero. The secular spin period evolution is generally smooth and consistent with a random-walk evolution \citep{deKool+93}. However, the source has shown rapid ($\dot{\nu}=(3-5)\times10^{-12}\,$Hz\,s$^{-1}$) and prolonged ($\sim30\,$days) spin-up episodes \citep{Koh+97}, probably indicating the formation of a transient accretion disk. Recently, GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/gx301m2.html}} observed another similar episode over a longer period ($\sim40\,$days), which was also stronger ($\dot{\nu}\approx6\times10^{-12}\,$Hz\,s$^{-1}$) than previously observed (\citealt{Nabizadeh19, Abarr20}; Malacaria C. et al.
in preparation), during which NuSTAR measured an unabsorbed luminosity of $\sim1.5\times10^{37}\,$erg\,s$^{-1}$ at a corresponding spin-up rate of $\dot{\nu}\approx3.6\times10^{-13}\,$Hz\,s$^{-1}$ as measured by GBM around the NuSTAR observation. This is a factor of three lower than the spin-up rate predicted by Eq.~\ref{eq:QSAMup} of $\sim1.1\times10^{-12}\,$Hz\,s$^{-1}$. On the other hand, the spin-up rate predicted by the accretion-disk theory (see Eq.~\ref{eq:parmar}), assuming a magnetic field of $4\times10^{12}\,$G \citep{Kreykenbohm04}, is $\sim4\times10^{-12}\,$Hz\,s$^{-1}$, about an order of magnitude stronger than the observed rate, but in agreement with the rate observed during the initial phase of the spin-up episode. \subsubsection{GX 1+4}\label{subsubsec:gx1+4} GX 1+4 is an LMXB discovered by a balloon-borne X-ray observation in 1970, with an NS spin period of ${\sim}2\,$minutes \citep{Lewin1971}. The orbital period is not yet well known. Older studies reported periodic signals at $304\,$days in the optical band (e.g., \citealt{Cutler+86}) or at $1161\,$days in the X-ray band (e.g., \citealt{Hinkle+06}). However, RXTE data from GX~$1+4$ do not show modulation at any of these proposed periods \citep{Corbet+08}. The donor companion is \emph{V2116 Oph} \citep{Glass1973}, an M6~III red-giant star that underfills its Roche lobe \citep{Chakrabarty+Roche97,Hinkle+06}. Therefore, it is assumed that the NS is wind-fed by the companion, making it part of the so-called symbiotic X-ray binaries (SyXBs; \citealt{Corbet+08, Yungelson19}). Different attempts have been made to determine the distance to the source (see, e.g., \citealt{Gonzalez+12} and references therein), but it still remains poorly constrained. The Gaia measured distance to the source is $7.5^{+4.3}_{-2.8}\,$kpc.
For the first ten years following its discovery, the source was spinning up strongly at an average rate of $\dot{\nu}=6.0\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Doty+81,Warwick+81, White+83}. No observations were recorded between 1980 and 1983, as EXOSAT did not detect the source, indicating that the flux had decreased below the sensitivity of the instrument. The source reappeared in 1987 in observations by Ginga at a lower luminosity, exhibiting a torque reversal with an average spin-down rate of $\dot{\nu}=-3.7\times10^{-12}\,$Hz\,s$^{-1}$ \citep{Makishima+88,Nagase89}. With a magnetic field of $3.7\times10^{12}\,$G \citep{Ferrigno07}, and assuming an average luminosity of $4\times10^{36}\,$erg\,s$^{-1}$ (see \citealt[and references therein]{Gonzalez+12}), the quasi-spherical settling accretion model (Eq.~\ref{eq:QSAMdown}) predicts a comparable spin-down value, $\dot{\nu}=-1.5\times10^{-12}\,$Hz\,s$^{-1}$. GBM\footnote{\url{https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/gx1p4.html}} observations show the source to still be undergoing a general spin-down trend, with occasional brief spin-up episodes corresponding to bright flux levels. \section{Discussion}\label{sec:discussion} The (almost) all-sky, continuous, long-term coverage of GBM provides fresh data that help improve the analysis of accreting X-ray pulsars and provide enough statistics for interesting population studies. Examples of this are shown in Figures~\ref{fig:bimodality} and \ref{fig:torque}. \subsection{Bimodal spin period distribution} \begin{figure}[!t] \includegraphics[width=0.45\textwidth]{Bimodal_pspin_distribution.pdf} \caption{The spin period distribution of all GBM-detected BeXRBs in the Milky Way.} \label{fig:bimodality} \end{figure} \citet{Knigge11} showed that the $P_{\rm s}-P_{\rm orb}$ distribution in BeXRBs is bimodal.
While the bimodality of $P_{\rm orb}$ is only marginal, the distribution of $P_{\rm s}$ has a clear gap at around $40\,$s and two distinct distributions peaking at $\sim10\,$s and $\sim200\,$s, respectively. Those authors proposed that the two subpopulations reflect the type of supernova that formed the NS: electron-capture supernovae (ECS) would produce NSs with shorter spin periods on average, while iron core-collapse supernovae (CCS) would produce NSs with longer spin periods. Other explanations of the bimodality have also been put forward in previous studies. \citet{Cheng14} proposed that the bimodal distribution is the result of two different accretion modes: (1) advection-dominated accretion flow (ADAF) disks, which are more likely to form during a Type I outburst, or (2) thin accretion disks formed during a Type II outburst. The ADAF disk is inefficient at spinning up the NS; thus, BeXRBs experiencing mostly Type I outbursts will produce NSs with longer spin periods. The thin disks produced as a result of Type II outbursts are more efficient at transferring angular momentum, and thus BeXRBs dominated by this mechanism will spin the NSs up into the shorter spin period subpopulation. While those authors analyzed the cumulative population of BeXRBs (Milky Way, LMC, and SMC), it is of interest to test the bimodality for the Galaxy alone. Over the last decade, GBM has constrained the ephemerides of a large number of such transients, allowing us to probe the spin period distribution of BeXRBs in the Galaxy. Figure~\ref{fig:bimodality} shows two distinct distributions peaking at ${\sim}10\,$s and $\sim200\,$s with a clear gap at $\sim40\,$s, confirming the findings of \citet{Knigge11}.
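The gap-detection logic behind Fig.~\ref{fig:bimodality} can be illustrated on synthetic data; the subpopulation parameters and sample sizes below are assumptions chosen for illustration, not the actual GBM sample:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic spin periods: two lognormal subpopulations peaking near ~10 s
# and ~200 s, mimicking the bimodal Galactic BeXRB distribution
p_fast = rng.lognormal(mean=np.log(10.0), sigma=0.4, size=40)
p_slow = rng.lognormal(mean=np.log(200.0), sigma=0.5, size=40)
logp = np.log10(np.concatenate([p_fast, p_slow]))

# histogram in log10(P): a bimodal sample leaves a depleted bin ("gap")
# between the two peaks, expected here near log10(40 s) ~ 1.6
counts, edges = np.histogram(logp, bins=12, range=(0.0, 3.5))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = (centers > 1.1) & (centers < 2.4)
gap_center = centers[mask][np.argmin(counts[mask])]
```

With these (assumed) parameters the emptiest bin between the two peaks falls near $\log_{10}(40\,{\rm s})\approx1.6$, reproducing the qualitative structure described above.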
\subsection{Accretion-driven torques} \begin{figure*}[!t]\centering \includegraphics[width=1.\textwidth]{torque_transients2.pdf} \caption{Distribution of the maximum spin period derivative (absolute value) against the maximum peak bolometric luminosity normalized to the source spin period ($PL_{37}^{3/7}$) for all GBM transient sources. The turquoise, orange, and green continuous lines correspond to the GL model (Eq.~\ref{eq:torque2}) for different magnetic field values, i.e. $5\times10^{11}$, $2\times10^{12}$ and $10^{13}\,$G, respectively (all assuming a coupling factor of $k=0.5$). Model cusps mark the points where the dimensionless torque parameter $n(\omega_s)=0$. Despite the lack of an orbital solution, the following sources have been included in the plot: MXB~0656--072, GS~0834--430, IGR~J19294+1816, RX~J0440.9+4431, Cep~X-4, XTE~J1858+034, GRO~J1008--57, IGR~J18179--1621, MAXI~J1409--619, and GRO J2058+42.} \label{fig:torque} \end{figure*} We analyzed the relationship between the spin-up strength and the observed luminosity for all of the GBM XRP transient sources. As a measure of the spin-up strength, we took the maximum spin period derivative measured over the brightest outburst recorded during the GBM lifetime for a given source, and compared it with the corresponding peak luminosity of that outburst. This allows us to test the dependence between these two quantities predicted by the Ghosh--Lamb (GL) model, expressed by Equation~(\ref{eq:torque2}). For a meaningful comparison of the torque strength as a function of the luminosity, we considered the bolometric ($0.1-200\,$keV) luminosity inferred for each source as described in Appendix~\ref{sec:bolo}. Our results are shown in Fig.~\ref{fig:torque}, together with theoretical predictions from Eq.~\ref{eq:torque2}. The figure clearly shows a correlation, with the behavior of the measured sources mostly following the predictions of the GL model.
The model predicts different spin equilibrium values, $n(\omega_s)=0$, for different magnetic field strengths. The spin equilibrium is characterized by a cusp in the GL function and discriminates between the fast rotator regime ($\omega_s>1$, to the left side of the cusp), where the source is spinning down, and the slow rotator regime ($\omega_s<1$, to the right side of the cusp), where the source is spinning up. Deviations of up to a factor of three between observed spin-up rates and predictions from the GL model have been highlighted by \citet{Sugizaki17}. However, the observed deviations were most likely ascribed to the uncertainty in the involved parameters, in particular the distance, $d$. Thanks to Gaia DR2 (see Appendix~\ref{sec:gaiadist}), we have been able to substantially reduce the uncertainty on this parameter and compare the model predictions with the most precise data currently available. The case of V~0332+53 (X~0331+53) illustrates how improved distances aid our understanding of source behavior. \citet{Sugizaki17} found the deviation of this source from theoretical predictions to be anomalously high, concluding that an overestimate of the source distance was responsible. The improved distance obtained by Gaia ($5.1^{+1.1}_{-0.8}\,$kpc) better constrains the source's behavior. Although a marginal deviation between observations and the torque model remains, the difference is significant only at the $1\sigma$ confidence level. While the majority of the Gaia distances in this paper are consistent with previous measurements, some sources in this work have distances that are inconsistent with values estimated through other methods, e.g., by adopting the luminosity dependence of the spin-up rate (see Equation~(\ref{eq:torque2})).
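For reference, the dimensionless torque entering the GL model can be sketched with the analytic fit quoted by Ghosh \& Lamb (1979); whether Equation~(\ref{eq:torque2}) adopts this exact fit is an assumption here. Note that in this fit the torque vanishes at a critical fastness $\omega_{\rm c}\approx0.35$:

```python
def n_torque(omega_s):
    """Dimensionless accretion torque n(omega_s) from the analytic fit of
    Ghosh & Lamb (1979); omega_s is the fastness parameter (fit valid for
    omega_s up to ~0.9)."""
    return (1.39 * (1.0 - omega_s * (4.03 * (1.0 - omega_s) ** 0.173 - 0.878))
            / (1.0 - omega_s))

# slow rotators (small omega_s) spin up (n > 0), fast rotators spin down
# (n < 0); the cusp in the torque-luminosity curves sits where n = 0
```

A quick scan of `n_torque` over $\omega_s$ reproduces the cusp structure shown in Fig.~\ref{fig:torque}: positive torque at low fastness, a zero near $\omega_s\approx0.35$, and negative torque beyond.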
Such a discrepancy is likely due to a combination of factors, including the uncertainty in the physical mechanisms driving the spin evolution of accreting NSs, the poorly constrained parameters of Equation~(\ref{eq:torque2}), and the limited range over which Gaia can measure distances. The latter implies that reliable distances can only be obtained when the measured parallax is larger than its uncertainty, which in practice limits distances to below $\sim5\,$kpc \citep{Bailer-Jones18, Luri18}. Even considering the Gaia measured distances, a few sources still show considerable deviations from the GL model. Some of these have unknown or poorly known orbital solutions, namely GS~0834--430, IGR~J19294+1816, RX~J0440.9+4431, XTE~J1858+034, IGR~J18179--1621, MAXI~J1409--619, GRO~J1008--57 and GRO J2058+42. Part of the deviation is due to the lack of a timing solution, although a few binary systems with known orbital solutions have also been observed to deviate from the GL model predictions, e.g., 4U~0115+634 and 2S 1417--624. Recently, \citet{Ji+2019} analyzed a 2018 outburst of 2S~1417--624, employing a standard GL model to account for the spin-up shown by the source during the accretion episode. However, they had to use a coupling parameter value of $k\sim0.3$ and a distance of $20\,$kpc in order to achieve a good fit. We find that a standard value of the coupling constant ($k\sim0.5$) combined with the measured Gaia distance of $\sim10\,$kpc is unable to account for the observed spin-up from that outburst (see Figure~\ref{fig:torque}), in agreement with the findings of \citet{Ji+2019}. \subsection{Improvement/Finding of orbital solutions} Following the technique described in Sect.~\ref{sec:timing}, we derived or updated ephemerides for a selected sample of sources: 4U 0115+63, 4U 1901+03, and 2S 1553--542. To separate the pulsed emission from the background, the good CTIME data are represented by Fourier components, while a background model is fit to the data and then subtracted.
The background model includes bright known sources, the variation in the detector responses, the Earth occultation steps, and a remaining long-term background contribution. Details on this procedure are described in \citet{Finger+99}, \citet{Camero+10}, and \citet{Jenke+12}. The observed times are barycentered using the JPL planetary ephemeris DE200 \citep{Standish90}. The barycentered arrival times (in Barycentric Dynamical Time, TDB) are modulated by the binary orbital motion of the emitting pulsar, $t^{\rm em} = {\rm TDB} - z$, and the binary orbital parameters can be constrained by fitting the TDB times to Equation~(\ref{eq:z_def}) (starting from an approximate, previously known solution). Owing to their X-ray duty cycles, orbital fits were obtained for two of the HMXBs. Best-fit orbital elements for those sources are presented in Table~\ref{tab:summary}. \section{Summary}\label{sec:summary} We have summarized more than $10\,$yr of Fermi/GBM accreting X-ray pulsar observations carried out as part of the GBM Accreting Pulsars Program (GAPP). Detailed inspection of the spin histories of accreting XRPs unveils a plethora of differences, highlighting the importance of continuous, wide-field monitoring observations. We showed how GBM observations are vital in addressing decade-long behaviors, such as long-term cycles in the Be disks (e.g., in EXO 2030+375) and torque reversals (e.g., in 2S 1845--024). The adherence of accretion-driven torques to the GL model for all GBM detected transient systems, as well as the predictions of the quasi-spherical settling accretion model for a subsample of wind-accreting systems, has been tested, aided by updated source distances from Gaia DR2. This has allowed us to test the model predictions and to cross-check independent distance determinations (e.g., 2S 1417--624). Finally, we obtained new/updated orbital solutions for three accreting XRPs.
Our results demonstrate the capabilities of GBM as an excellent instrument for monitoring accreting X-ray pulsars and its important scientific contribution to this field. \acknowledgments We dedicate this paper to Dr. Mark Finger, retired, who initiated the GBM Accreting Pulsars Program and was an irreplaceable part of it. We thank the anonymous referee, whose careful reading and suggestions have improved the manuscript. This research has made use of data and software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. We acknowledge extensive use of the NASA Abstract Database Service (ADS). C.M. is supported by an appointment to the NASA Postdoctoral Program at the Marshall Space Flight Center, administered by Universities Space Research Association under contract with NASA. \facilities{Fermi/GBM, CGRO/BATSE, Swift/BAT} \software{HEASoft} \bibliographystyle{yahapj}
\section*{I. INTRODUCTION} Vortex dynamics in a Bose-Einstein condensate (BEC) has been studied intensively, initially in the context of superfluid helium and later in dilute trapped BECs. The motion of vortices in both uniform and inhomogeneous condensates has been the subject of many theoretical works, and extensive reviews of these efforts have been given in \cite{fetterRev,pismen}. In this paper we consider the problem of vortex motion in an asymptotically homogeneous condensate in the presence of a solid wall where the wave function of the condensate vanishes. Recent discussions (see, for example, \cite{anglin} and references therein) on the motion of vortices near the surface of trapped condensates have questioned the relevance of the method of images in describing this motion. In that case, the nonuniform condensate density is approximated by a linear function that vanishes at the Thomas-Fermi surface, and the vortex motion can be considered to arise principally from the local density gradient. Here, we consider a rather different situation, in which the condensate density approaches its bulk value within a healing length $\xi$, and the vortex is located in the asymptotically uniform region. In this latter case, the local gradient of the condensate density is very small. The motion can be interpreted as arising from an image, but the depleted surface layer induces an effective shift in the position of the image in comparison with the case of a uniform incompressible fluid. Our geometry is two dimensional, with the vortex aligned along the $z$ axis, parallel to the surface of the wall. 
The dynamics of the time-dependent BEC in the presence of the solid wall at $y=0$ is described by the Gross-Pitaevskii (GP) equation \begin{equation} -2 {\rm i} \psi_{t} = \nabla^2 \psi + (1 - |\psi|^2)\psi, \label{gp} \end{equation} subject to the boundary conditions \begin{equation} \psi(x,y=0,t)=0, \qquad 0\le y <\infty, \quad |x|<\infty, \label{boundary} \end{equation} in dimensionless units, such that the distance is measured in healing lengths $\xi=\hbar/\sqrt{2mgn_0}$, where $g$ is a two-dimensional coupling constant with dimension of energy times area, $m$ is the mass of the boson and $n_0$ is the bulk number density per unit area. Time is measured in units of $m\xi^2/\hbar$ and energy in units of $\hbar^2n_0/m$. In our units, the speed of sound $c$ in the bulk condensate is $c=1/\sqrt{2}$. In the absence of vortices, the exact solution of (\ref{gp}) for the stationary state of the semi-infinite condensate is \begin{equation} f(y)=\tanh(y/\sqrt{2}). \label{f} \end{equation} In classical inviscid fluid dynamics with constant mass density $\rho$, the relevant kinematic boundary condition at a solid wall with normal vector ${\bf n}$ is \begin{equation} \rho \,{\bf u}\cdot {\bf n} =0, \label{rhou} \end{equation} where $\bf u$ is the velocity of the fluid. The corresponding problem of a vortex moving parallel to the wall is solved by placing one or more image vortices in such a way that condition (\ref{rhou}) is identically satisfied. For the dynamics described by the GP equation (\ref{gp}), the density $\rho \propto |\psi|^2$ is no longer constant, but rather vanishes at the surface of the wall. Thus condition (\ref{rhou}) is automatically satisfied, and all components of ${\bf u}$ can in principle remain finite on the boundary. 
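As a consistency check (an added aside, not part of the original derivation), one can verify symbolically that the profile (\ref{f}) solves the stationary form of Eq.~(\ref{gp}), $f''+(1-f^2)f=0$, together with the boundary condition (\ref{boundary}):

```python
import sympy as sp

y = sp.symbols('y', real=True)
f = sp.tanh(y / sp.sqrt(2))  # the ground-state profile of Eq. (f)

# stationary GP equation in these dimensionless units: f'' + (1 - f^2) f = 0
residual = sp.diff(f, y, 2) + (1 - f**2) * f
```

The residual simplifies to zero identically, and $f(0)=0$ as required at the wall.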
Therefore, it may seem that image vortices are irrelevant in the case of the GP equation, so that the vortex should remain stationary away from the boundary (where the fluid density is constant apart from exponentially small corrections). Our numerical simulations show that this is not true. In fact, the vortex moves parallel to the boundary, and it moves {\it faster} than a corresponding pair of vortices of opposite circulation in a uniform condensate in the absence of the depletion caused by the boundary. The purpose of our paper is to study this motion in detail. The paper is organized as follows. In Sec.\ II we find the family of disturbances moving with a constant velocity along the solid wall by numerically solving the Gross-Pitaevskii (GP) equation in the frame of reference moving with the disturbance. In Sec.\ III a time-dependent Lagrangian variational analysis is used to find the first two leading terms in the equation of the vortex motion in the limit of large distance from the wall. In Sec.\ IV an alternative approach based on the dependence of total energy and momentum on the vortex position is used to determine the vortex velocity. In Sec.\ V we summarize our main findings. \section*{II. Numerical solutions} In what follows we seek solitary-wave solutions of Eq.\ (\ref{gp}) that preserve their form as they move parallel to the wall with fixed velocity $U$. For each value of the velocity $U$, we have $$\psi(x,y,t)=\psi(\eta,y),$$ where $\eta=x-Ut$. The GP equation (\ref{gp}) becomes \begin{equation} 2 {\rm i} U\psi_{x} = \nabla^2 \psi + (1 - |\psi|^2)\psi, \label{ugp} \end{equation} where we set $x=\eta$. In the absence of the wall, the solitary-wave solutions of Eq.\ (\ref{ugp}) were found by Jones and Roberts \cite{jr4}. 
For each value of $U$, there is a well-defined momentum $p$ and energy $E$, given by \begin{eqnarray} p &=&\textstyle{\frac{1}{2{\rm i}}} \int\left[(\psi^*-1)\partial_x\psi -(\psi-1)\partial_x\psi^*\right]\,dxdy\,,\label{pdef}\\ E &=& {\textstyle{\frac{1}{2}}}\int|\bm\nabla\psi|^2\,dxdy\ +\ {\textstyle{\frac{1}{4}}}\int(1-|\psi|^2)^2dxdy\,.\label{Edef} \end{eqnarray} In a momentum-energy $pE$ plot, the family of such solitary-wave solutions consists of a single branch that terminates at $p=0$ and $E=0$ as $U\rightarrow c$ (we call this curve the ``JR dispersion curve''). For small $U$ and large $p$ and $E$, the solutions are asymptotic to pairs of vortices of opposite circulation. As $p$ and $E$ decrease from infinity, the solutions begin to lose their similarity to vortex pairs. Eventually, for a velocity $U\approx 0.43$ (momentum $p_0\approx 7.7$ and energy $E_0\approx 13.0$) they lose their vorticity ($\psi$ loses its zero), and thereafter the solutions may better be described as ``rarefaction waves'' that can be thought of as finite-amplitude sound waves. The velocity of the vortex pair in the absence of the boundary is plotted in Fig.~\ref{jr} as a function of the vortex positions $\pm y_0$. The dashed line shows the asymptotic velocity $U=(2 y_0)^{-1}$, valid for large $y_0$. \begin{figure}[t] \centering \caption{\baselineskip=10pt \footnotesize [Color online] Graphs of the velocity of the vortex $U$ versus the vortex position $y_0$ as calculated via numerical integration of (\ref{ugp}) subject to the boundary conditions $\psi \rightarrow 1$ as $x^2+y^2 \rightarrow \infty$ without the wall (solid line) and the asymptotics given by $U=(2 y_0)^{-1}$ (dashed line).
} \bigskip \epsfig{figure=ujr.eps,height=2in} \begin{picture}(0,0)(10,10) \put(0,13) {$y_0$} \put(-225,140) {$U$} \end{picture} \label{jr} \end{figure} In analogy with these results, we used numerical methods to find the complete family of solitary-wave solutions of (\ref{ugp}) subject to the hard-wall boundary condition (\ref{boundary}). Specifically, we mapped the semi-infinite domain onto the box $(-\frac{\pi}{2},\frac{\pi}{2})\times(0,\frac{\pi}{2})$ using the transformation $ \widehat x=\tan^{-1}(D x) $ and $ \widehat y=\tan^{-1}(D y),$ where $D\sim 0.4-1.5$. The transformed equations were expressed in a second-order finite-difference form using $200^2$ grid points, and the resulting nonlinear equations were solved by the Newton-Raphson iteration procedure, using a banded matrix linear solver based on the bi-conjugate gradient stabilised iterative method with preconditioning. Similar to \cite{jr4}, we are interested in finding the dispersion curve for our solutions in the $pE$ plane. The energy and impulse of each solitary wave are defined by the expressions (\ref{pdef})-(\ref{Edef}) appropriately modified for the ``ground state'' given by $f(y)$: \begin{eqnarray} p &=& \textstyle{\frac{1}{2{\rm i}}}\int\left[(\psi^*-f(y))\partial_x\psi -(\psi-f(y))\partial_x\psi^*\right]\,dxdy\,,\label{pdef2}\\ E &=& \textstyle{{\frac{1}{2}}}\int\left[|\bm \nabla\psi|^2\ + \textstyle{{\frac{1}{2}}}(1-|\psi|^2)^2-{\rm sech}^4(y/\sqrt{2})\right]\, dxdy\,.\label{Edef2} \end{eqnarray} In Fig.\ \ref{pemason}, we show the resulting solutions in the $pE$ plane. The plot of the velocity dependence on the vortex position is given below in Fig.\ \ref{umason} in Sec.~IV. All our vortex solutions with a rigid wall (those with a node in the fluid's interior) move with velocities less than $U=0.47$. For $U>0.47$, the zero of the wave function occurs on the wall only, and the solitary waves resemble rarefaction pulses of the JR dispersion curve away from the wall. 
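The effect of the coordinate mapping used above can be illustrated with a small numerical sketch: on a uniform grid in $\widehat{x}$, derivatives with respect to $x$ follow from the chain rule $\partial_x = D\cos^2(\widehat{x})\,\partial_{\widehat{x}}$. The test profile below is illustrative only, not the actual solver:

```python
import numpy as np

D = 1.0
N = 400
# uniform grid in the mapped coordinate xhat in (-pi/2, pi/2); the endpoints
# are excluded since the physical coordinate x = tan(xhat)/D diverges there
xhat = np.linspace(-np.pi / 2, np.pi / 2, N + 2)[1:-1]
x = np.tan(xhat) / D

# chain rule: d/dx = (dxhat/dx) d/dxhat, with dxhat/dx = D cos^2(xhat)
f = np.tanh(x / np.sqrt(2))            # test profile with a known derivative
h = xhat[1] - xhat[0]
df_dx = D * np.cos(xhat) ** 2 * np.gradient(f, h)

exact = (1 - np.tanh(x / np.sqrt(2)) ** 2) / np.sqrt(2)
max_err = np.max(np.abs(df_dx - exact))
```

A modest grid in $\widehat{x}$ thus resolves derivatives accurately while the physical grid stretches to $|x|\gg1$, which is why the mapped formulation is convenient for the semi-infinite domain.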
\begin{figure} \centering \caption{\baselineskip=10pt \footnotesize [Color online] The momentum-energy curve of the solitary wave solutions of Eq. (\ref{ugp}) with the solid wall (solid line) and the JR dispersion curve (dashed line) that has no solid wall. For the solid-wall boundary condition, the vortex solutions with nonzero vorticity and nodes are shown in black and the vorticity-free solutions in grey (green). } \bigskip \epsfig{figure=pemason.eps,height=2in} \begin{picture}(0,0)(10,10) \put(0,13) {$p$} \put(-225,150) {$E$} \end{picture} \label{pemason} \end{figure} \section*{III. Variational approach} The time-dependent variational Lagrangian method offers a convenient analytical approach to estimate the vortex velocity for large $y_0$. The dimensionless GP equation is the Euler-Lagrange equation for the time-dependent Lagrangian functional \begin{equation}\label{lagrange} {\cal L} = {\cal T} - {\cal E} \equiv \textstyle{\frac{1}{2}} {\rm i} \int \left(\psi^* \psi_t - \psi_t^* \psi \right)dxdy - \textstyle{\frac{1}{2}} \int \left(|\bm \nabla\psi|^2 + \textstyle{\frac{1}{2}}|\psi|^4\right)dxdy \end{equation} where the time-dependent terms constitute the ``kinetic energy'' $\cal T$ and the remaining terms are the GP energy functional $\cal E$. We assume a trial function that depends on one or more parameters, and use this trial function to evaluate the Lagrangian $\cal L$ in Eq.~(\ref{lagrange}), which will depend on the parameters and their first time derivatives. The resulting Euler-Lagrange equations determine the dynamical evolution of the parameters. For the present problem of a vortex moving parallel to a rigid boundary with the boundary condition (\ref{boundary}), the vortex coordinates ($x_0,y_0$) serve as the appropriate parameters, where $-\infty<x_0<\infty$ and $0<y_0<\infty$.
When the condensate contains a vortex at a distance $y_0$ from the boundary, the original condensate wave function (\ref{f}) acquires both a phase $S({\bf r, \bf r}_0)$ and a modulation near the center of the vortex, where the density vanishes. To model this behavior for the half space, it is preferable to include an image vortex at the image position $( x_0, -y_0)$. In this case, the approximate variational contribution to the phase is \begin{equation}\label{S} S({\bf r}, {\bf r}_0) = \arctan\left(\frac{y-y_0}{x-x_0}\right) - \arctan\left(\frac{y+y_0}{x-x_0}\right), \end{equation} where the second term reflects the negative image vortex. The derivation of the Gross-Pitaevskii equation involves an integration by parts of the kinetic energy density $|\bm \nabla \psi|^2 $ to yield $-\psi^*\nabla^2 \psi$ plus a surface term proportional to $\psi^* \partial_y\psi$, and this is one rationale for including the image. Strictly speaking, this term vanishes because $f(0)=0$, but the image vortex ensures that the contribution vanishes even in the case of a uniform condensate. The image vortex also cuts off the long-range tail of the velocity, giving a convergent kinetic energy even for a semi-infinite condensate. It thus seems more physical to include the image in this particular geometry, even though the image is often omitted for the highly nonuniform density obtained in the Thomas-Fermi limit for a trapped condensate~\cite{anglin,al,Kim04,Kim05}. In addition, the vortex affects the density near its core, which is modeled by a factor $v(|{\bf r} - {\bf r}_0|)$, where $v(r)$ vanishes linearly for small $r=|{\bf r} - {\bf r}_0|$ and $v(r) \to 1$ for $r \gg1$~\cite{fe68,al}. In principle, the function $v(r)$ can be taken as the exact radial solution of the Gross-Pitaevskii equation in an unbounded condensate, but this choice requires numerical analysis, and it is often preferable to use a variational approximation.
A particularly simple choice is~\cite{Fisc03} \begin{equation}\label{core2} v(r) = \begin{cases} r/\lambda & \text{for $r\le \lambda$};\\ 1 &\text{for $r \ge \lambda$}, \end{cases} \end{equation} where $\lambda$ is the effective vortex core size; a variational analysis yields the optimal value $\lambda = \sqrt 6$. With these various approximations, the variational trial function is~\cite{al} \begin{equation} \psi({\bf r}, {\bf r}_0,t) = \,e^{iS({\bf r},{\bf r}_0)}\,f(y)\,v(|{\bf r}-{\bf r}_0|). \end{equation} The time-dependent part of the functional in Eq.~(\ref{lagrange}) becomes \begin{eqnarray} {\cal T} & = &-\displaystyle{\int \left[f(y)\right]^2\partial_tS\, \left|v(|{\bf r} - {\bf r}_0|)\right|^2}\, dxdy \approx - \int \left[f(y)\right]^2\partial_tS \, dxdy, \end{eqnarray} where the last approximation omits the effect of the vortex on the density, replacing $|v|^2$ by 1 throughout the condensate. A straightforward analysis then yields \begin{equation}\label{calT} {\cal T} \approx 2\pi \,\dot{x}_0 \int_0^{y_0} \left[f(y) \right]^2\, dy, \end{equation} where $\dot{x}_0$ is the velocity $U$ of the vortex parallel to the wall. The contribution to $\cal T$ from the vortex core yields a term that is smaller than Eq.~(\ref{calT}) by a factor of relative order $y_0^{-2}$, which is negligible relative to the leading correction of order $y_0^{-1}$ that we retain here. Since the energy functional will turn out to depend only on the single coordinate $y_0$, the Euler-Lagrange equation for $x_0$ implies that $y_0$ remains constant (as expected from energy considerations). In contrast, the equation for $y_0$ reduces to \begin{equation} \frac{d}{dt}\,\frac{\partial {\cal L}}{\partial \dot{y}_0} = \frac{\partial {\cal L}}{\partial y_0} = 0, \end{equation} since $\dot{y}_0$ does not appear in ${\cal T}$ (and hence in $\cal L$).
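The integral in Eq.~(\ref{calT}) has the closed form $\int_0^{y_0} f(y)^2\,dy = y_0-\sqrt{2}\tanh(y_0/\sqrt{2})$, which tends to $y_0-\sqrt{2}$ for $y_0\gg1$, consistent with the $\sqrt{2}$ image shift obtained in Sec.~IV. A minimal sympy verification (an added check, not part of the original text):

```python
import sympy as sp

y0 = sp.symbols('y_0', positive=True)

# closed form of Integral(tanh(y/sqrt(2))**2, (y, 0, y0)) appearing in calT
closed_form = y0 - sp.sqrt(2) * sp.tanh(y0 / sp.sqrt(2))

# verify by differentiation: d(closed_form)/dy0 must equal the integrand
# tanh(y0/sqrt(2))**2, and the antiderivative must vanish at y0 = 0
residual = sp.simplify(sp.diff(closed_form, y0) - sp.tanh(y0 / sp.sqrt(2)) ** 2)
```

Checking an antiderivative by differentiation avoids relying on the symbolic integrator and confirms the closed form term by term.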
Thus the dynamical motion of the vortex is given by \begin{equation}\label{dynamics} \dot{x}_0 \approx \frac{1}{2\pi\left[ f(y_0)\right]^2} \,\frac{\partial {\cal E}}{\partial y_0}. \end{equation} It is evident that only the derivative $\partial {\cal E}/\partial y_0$ is relevant, so that several terms in $\cal E$ play no role in the present analysis. For example, the derivative of the interaction energy ${\cal E}_{\rm int}(y_0) = \frac{1}{4} \int |\psi|^4\, dxdy$ vanishes exponentially for $y_0\gg 1$ and does not affect the dynamics of the vortex for large $y_0 $. Similarly, the kinetic energy in Eq.~(\ref{lagrange}) separates into two parts, arising from the density variation and the flow energy, respectively; the contribution from the density variation is also negligible for $y_0\gg 1$. The remaining (dominant) kinetic energy ${\cal E}_{kv} = \frac{1}{2}\int |\bm \nabla S|^2\,|\psi|^2\, dxdy$, arises from the vortex flow. The squared velocity now follows from Eq.~(\ref{S}) \begin{equation} |\bm \nabla S|^2 = \frac{y_0}{y}\left[ \frac{1}{(x-x_0)^2 + (y-y_0)^2} - \frac{1}{(x-x_0)^2+(y+y_0)^2}\right]. \end{equation} The resulting flow-induced kinetic energy is \begin{equation} {\cal E}_{kv} = \frac{1}{2} \int f(y)^2 \,v(|{\bf r-\bf r}_0|)^2 \,|\bm \nabla S|^2\, dxdy. \end{equation} It is convenient to divide this integral up into three (strip-shaped) regions \begin{eqnarray} \text{region I}: & 0\le y\le y_0-\lambda & \text{ (note that $v = 1 $ in I)}\\ \text{region II}: & y_0 -\lambda \le y\le y_0+\lambda& \\ \text{region III}: & y_0+\lambda \le y\le \infty & \text{(note that $v = 1 $ in III)} \end{eqnarray} In region II, inside the vortex core $|{\bf r - \bf r}_0|\le \lambda$, the integrals can be found approximately in cylindrical coordinates and yield ${\cal E}_{kvc}\approx (\pi/2)f(y_0)^2,$ neglecting terms of order $\lambda ^2/y_0^2$. The remaining region of the strip II outside the core simplifies because $v = 1$. 
It is convenient to parametrize $y = y_0 +\lambda \sin\theta$ by an angle $\theta$ that runs from $-\pi/2$ to $\pi/2$. In this region, the symmetry in $x$ allows us to consider only $x\ge 0$, and the lower limit for $x$ is $x_m(\theta) = \lambda \cos\theta$. The relevant integral is \begin{eqnarray} & &{ \displaystyle\int_{x_m(\theta)}^\infty \left[ \frac{1}{x^2 + (y-y_0)^2} - \frac{1}{x^2+(y+y_0)^2}\right]}\, dx \nonumber \\ &\qquad=& {\displaystyle\frac{1}{|y-y_0|}\left[\frac{\pi}{2}-\arctan\left(\frac{x_m(\theta)}{|y-y_0 |}\right)\right] - \frac{1}{y+y_0}\left[\frac{\pi}{2}-\arctan\left(\frac{x_m(\theta)}{y+y_0 }\right) \right]}\nonumber \\ &\qquad \approx& {\displaystyle \frac{|\theta|}{\lambda |\sin\theta|} -\frac{\pi}{4y_0}}. \end{eqnarray} The total answer for region II is \begin{equation} {\cal E}_{kv\rm II} \approx \pi f(y_0)^2 \left[\frac{1}{2} +\ln 2 -\frac{\lambda }{2y_0} + \cdots\right], \end{equation} where $\ln 2$ arises from the definite integral $\int_{-\pi/2}^{\pi/2} |\theta|/|\tan\theta|\,d\theta = \pi \ln 2$. In regions I and III, the integrals can be found in cartesian coordinates, integrating over $x$ first. Each of these gives two contributions; one is simply a combination of logarithms obtained by replacing $f^2$ by $1$ and the other from the remainder with $-(1-f^2) = -{\rm sech}^2(y/\sqrt{2})$. \begin{eqnarray}\label{I} {\cal E}_{kv\rm I} &=&\frac{\pi}{2}\,\ln\left(\frac{2y_0-\lambda}{\lambda}\right) - \frac{\pi}{2}\int_0^{y_0-\lambda} {\rm sech}^2(y/\sqrt{2})\,\left(\frac{1}{y_0-y}+ \frac{1}{y_0 + y}\right)\, dy\\ \label{III}{\cal E}_{kv\rm III} &=&\frac{\pi}{2}\,\left[ 2\ln\left(\frac{y_0+\lambda}{\lambda}\right) - \ln\left(\frac{2y_0+\lambda}{\lambda}\right) \right] \nonumber \\ & &- \frac{\pi}{2}\int_{y_0+\lambda}^\infty {\rm sech}^2(y/\sqrt{2})\,\left(\frac{1}{y-y_0}+ \frac{1}{y + y_0}-\frac{2}{y}\right)\, dy. 
\end{eqnarray} To evaluate $\partial {\cal E}_{kv\rm I}/\partial y_0$ and $\partial {\cal E}_{kv\rm III}/\partial y_0$, we first differentiate the expressions in Eqs.~(\ref{I}) and (\ref{III}), expand the integrands in the powers of $1/y_0$ through $O(y_0^{-2})$ and integrate to get \begin{eqnarray} \frac{\partial{\cal E}_{kv\rm I}}{\partial y_0}&\approx& \frac{\pi}{2 y_0-\lambda} + \frac{\sqrt{2}\pi}{y_0^2}{\rm tanh}\left(\frac{y_0-\lambda}{\sqrt{2}}\right),\label{7a} \\ \frac{\partial{\cal E}_{kv\rm III}}{\partial y_0}&\approx& \frac{\pi y_0}{(\lambda + y_0)(\lambda +2 y_0)}. \label{8b} \end{eqnarray} A combination of these contributions gives the vortex velocity in Eq.~(\ref{dynamics}) as \begin{eqnarray} U=\dot x_0&\approx&\frac{1}{2}{\rm coth}^2\frac{y_0}{\sqrt{2}}\biggl[\frac{1}{2y_0-\lambda} + \frac{y_0}{(y_0+\lambda)(2y_0+\lambda)}\nonumber \\ &+& \sqrt{2}\biggl(\frac{1}{2}-\frac{\lambda}{2 y_0} + \ln 2\biggr){\rm sech}^2\bigl(\frac{y_0}{\sqrt{2}}\bigr){\rm tanh}\bigl(\frac{y_0}{\sqrt{2}}\bigr) \nonumber \\ &+&\frac{\lambda}{ 2 y_0^2}\,{\rm tanh}^2\left(\frac{y_0}{\sqrt{2}}\right)+ \frac{\sqrt{2}}{y_0^2}\,{\rm tanh}\left(\frac{y_0-\lambda}{\sqrt{2}}\right)\biggr]. \label{u1} \end{eqnarray} This further simplifies to \begin{equation} U\approx\frac{1}{2}\biggl(\frac{1}{2y_0-\lambda} + \frac{y_0}{(y_0+\lambda)(2y_0+\lambda)}+\frac{\lambda+2\sqrt{2}}{2y_0^2}\biggr), \label{u2} \end{equation} after we approximate the hyperbolic functions by their large $y_0$ behavior. When we neglect terms of order $1/y_0^3$, the expression (\ref{u2}) finally reduces to \begin{equation} U\approx \frac{1}{2 y_0} \biggl(1 + \frac{\sqrt{2}}{y_0}\biggr), \label{u3} \end{equation} independent of $\lambda$. \section*{IV. Vortex velocity through the Hamiltonian relationship between energy and impulse} In this section we present a different approach to the asymptotics for the vortex velocity based on the relationship between energy and momentum of the vortex pair. 
We compare the motion of a pair of vortices of opposite circulation (JR solutions) that satisfy \begin{equation} 2 i U_1 \psi_{1x} = \nabla^2 \psi_1 + (1 - |\psi_1|^2)\psi_1, \qquad |\psi_1|\rightarrow 1 \qquad |{\bf x}|\rightarrow \infty, \label{gp2} \end{equation} with the motion of a vortex next to the solid wall \begin{equation} 2 i U_2 \psi_{2x} = \nabla^2 \psi_2 + (1 - |\psi_2|^2)\psi_2, \qquad |\psi_2|\rightarrow |\tanh(y/\sqrt{2})| \qquad |{\bf x}|\rightarrow \infty. \label{gp3} \end{equation} For the asymptotics, we are interested in the solutions for small $U_i$, for $i=1,2$, that correspond to a pair of vortices of opposite circulation. We calculate the following quantities: the position of the pair $(0,\pm y_0)$, the energy and impulse given by (\ref{Edef})-(\ref{pdef}) for $i=1$ and by (\ref{Edef2})-(\ref{pdef2}) for $i=2$, so that \begin{equation} U_i=\frac{\partial E_i}{\partial p_i}. \label{U} \end{equation} These expressions for $i=1$ were derived in \cite{jr4,jr5}; similar arguments immediately lead to the expressions for $i=2$. Note that $U_i, E_i $ and $p_i$ are functions of $y_0$ and if $y_0 \gg 1$, \begin{equation} E_1=2 \pi \log(2 y_0), \qquad p_1=4 \pi y_0, \label{ep} \end{equation} (see, for instance, \cite{pitaevskii}). From (\ref{U}) we have \begin{equation} U_1=\frac{\partial E_1/\partial y_0}{\partial p_1/\partial y_0}=\frac{1}{2 y_0}, \label{uu} \end{equation} as expected. For large $y_0$ an accurate approximation to the solution of (\ref{gp2}) for the uniform flow was found \cite{b04} as $\psi_1=u_1(x,y) + i v_1(x,y)$ where \begin{eqnarray} u_1(x,y)&=&(x^2+y^2-y_0^2)\tilde R(\sqrt{x^2+(y-y_0)^2})\tilde R(\sqrt{x^2+(y+y_0)^2}),\nonumber\\ v_1(x,y)&=&-2 x y_0 \tilde R(\sqrt{x^2+(y-y_0)^2})\tilde R(\sqrt{x^2+(y+y_0)^2}), \label{uv0} \end{eqnarray} where $\tilde R(r)=\sqrt{ (0.3437 + 0.0286 r^2)/(1 + 0.3333 r^2 + 0.0286 r^4)}.$ Another accurate choice is $\tilde R(r)=(r^2 + 2)^{-1/2}$.
Similarly, we expect that $\psi_2$ is accurately approximated by $\psi_2 =\psi_1 |\tanh(y/\sqrt{2})|$. The question we pose is: {\it What is the position of the vortex $(0,y_0)$ moving parallel to the solid wall with the same velocity as the vortex pair at $(0,y_0-l)$ in the uniform flow?} Thus we seek the solution of \begin{equation} U_1(y_0-l(y_0)) = U_2(y_0) = \frac{\partial E_2/\partial y_0}{\partial p_2/\partial y_0}, \label{eq1} \end{equation} where we explicitly indicate the dependence of $U_1$ and $U_2$ on the vortex position. Since $U_1(y_0-l)=(2(y_0-l))^{-1}$, we obtain the expression for the shift in the vortex position, $l$, in the presence of the wall as \begin{equation} l(y_0)=y_0 - \frac{1}{2}\frac{\partial p_2/\partial y_0}{\partial E_2/\partial y_0}. \label{eq2} \end{equation} We rearrange the right-hand side of (\ref{eq2}) and use (\ref{ep}) to obtain the final equation that determines $l(y_0)$: \begin{equation} l(y_0)=y_0 - \frac{1}{2}\frac{4\pi +\widetilde{dp}}{2 \pi/y_0 + \widetilde{dE}}, \label{main} \end{equation} where $\widetilde{dE} = \partial(E_2-E_1)/\partial y_0$ and $\widetilde{dp} = \partial(p_2-p_1)/\partial y_0$ in the integral form given by (\ref{Edef})-(\ref{pdef}) for $E_1$ and $p_1$ and (\ref{Edef2})-(\ref{pdef2}) for $E_2$ and $p_2$. In evaluating the contribution $E_2-E_1$, only the kinetic terms involving derivatives with respect to $x$ were kept. The integrals $\widetilde{dE}$ and $\widetilde{dp}$ are exactly integrable in $x$ with the use of {\it Mathematica}; the leading-order terms in $1/y_0$ are given by \begin{eqnarray} \widetilde{dp} &=& -\frac{8 \pi}{y_0^3}\int_{-\infty}^{\infty} {\rm sech}^2(y/\sqrt{2})\, dy + O(y_0^{-5}),\nonumber \\ \widetilde{dE} &=&\biggl(\frac{ \pi}{y_0^2}+\frac{\pi (\pi^2-18)}{2y_0^4}\biggr)\int_{-\infty}^{\infty} {\rm sech}^2(y/\sqrt{2})\, dy + O(y_0^{-6}).
\label{sol} \end{eqnarray} With $\int_{-\infty}^{\infty} {\rm sech}^2(y/\sqrt{2})\, dy=2\sqrt{2}$ we finally arrive at \begin{equation} l(y_0)= \frac{\sqrt{2} y_0 (\pi^2 + 2(y_0^2-5))}{\sqrt{2} \pi^2 + 2 (y_0^3 + \sqrt{2}y_0^2 - 9 \sqrt{2})} = \sqrt{2}+O(y_0^{-1}). \label{ll} \end{equation} The vortex next to the wall moves with the velocity \begin{equation} U_2 =\frac{1}{2(y_0-\sqrt{2})}, \label{ufinal} \end{equation} which is the main result of our asymptotics. Note that if we expand (\ref{ufinal}) in a Taylor series we get $U_2=\left(2 y_0\right)^{-1}\left(1 + \sqrt{2}/y_0\right)$, which agrees with the result of Section III. Fig.~\ref{umason} gives the plot of the vortex velocity $U$ as a function of the distance of the vortex from the wall $y_0$ for the numerical solutions found in Sec.~II, the asymptotics found in Sec.~III [see Eq.~(\ref{u3})] and the asymptotics (\ref{ufinal}). \begin{figure}[t!] \centering \caption{\baselineskip=10pt \footnotesize [Color online] Graphs of the vortex velocity $U$ versus its distance from the wall $y_0$ as calculated via numerical integration of (\ref{gp3}) (black solid line) and the asymptotics given by (\ref{u3}) (red short-dashed line) and by (\ref{ufinal}) (green solid line). Also shown is the velocity of the vortex calculated by numerically integrating the right-hand side of (\ref{eq1}) (blue long-dashed line). As one can see, the simplifications made to derive (\ref{ufinal}) are consistent with the full expression (\ref{eq1}) for $y_0>4$. } \bigskip \epsfig{figure=umasonFinal.eps,height=2in} \begin{picture}(0,0)(10,10) \put(0,13) {$y_0$} \put(-225,150) {$U$} \end{picture} \label{umason} \end{figure} \section*{V. Discussion and Conclusions} In a uniform superfluid with a solid boundary, the motion of a quantized vortex arises from the image that enforces the condition of zero normal flow at the wall. In a trapped condensate, however, the image is generally omitted.
Instead, the motion can be considered to arise from the gradient of the trap potential, which is the same as the gradient in the density in the Thomas-Fermi limit \cite{fetterRev}. This behavior is especially clear for a single vortex at a distance $r_0$ from the center of a cylindrical container of radius $R$ \cite{Kim04}. For a classical incompressible fluid, the vortex precesses at an angular velocity \begin{equation}\label{cl} \left. \dot\phi\right|_{\rm cl} = \frac{\hbar}{m}\,\frac{1}{R^2 - r_0^2} \end{equation} because of the image vortex at $R^2/r_0$. In contrast, the precession rate in a trapped cylindrical condensate in the Thomas-Fermi limit \begin{equation}\label{TF} \left.\dot\phi\right|_{\rm TF} \approx \frac{\hbar}{m}\,\frac{\ln(R/\xi)}{R^2 - r_0^2} \end{equation} is larger than (\ref{cl}) because of the (typically large) logarithmic factor. Although the denominators of (\ref{cl}) and (\ref{TF}) both vary quadratically with $r_0$, the first result arises from the image and the second from the parabolic trap potential (and thus the parabolic density). If an image were included in the analysis of the trap, it would add a correction of order 1 to the large logarithm $\ln(R/\xi)$; such a term is comparable to other terms that are usually omitted. As an intermediate situation between these two extremes, the present paper has analysed the dynamics of a vortex in a half space bounded by a solid wall on which the density of condensate vanishes. This geometry represents the simplest problem of a vortex in a condensate interacting with a surface. Since the gradient of the density vanishes exponentially for $y_0 \gg \xi$, only the image remains to drive the motion in the asymptotic region. Our geometry allows us to separate the effect of the surface from the effect of the density gradient, both of which appear in the more complicated problem of an inhomogeneous trapped condensate \cite{anglin}. 
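A minimal numerical illustration of the two precession formulas (in units where $\hbar/m=1$; the sample values of $R$, $r_0$ and $\xi$ below are arbitrary) makes the logarithmic enhancement of (\ref{TF}) over (\ref{cl}) explicit:

```python
import math

HBAR_OVER_M = 1.0  # work in units with hbar/m = 1

def prec_classical(R, r0):
    # Eq. (cl): precession driven by the image vortex at R^2/r0
    return HBAR_OVER_M / (R**2 - r0**2)

def prec_tf(R, r0, xi):
    # Eq. (TF): precession in a cylindrical Thomas-Fermi condensate
    return HBAR_OVER_M * math.log(R / xi) / (R**2 - r0**2)

R, r0, xi = 100.0, 20.0, 1.0
ratio = prec_tf(R, r0, xi) / prec_classical(R, r0)
assert abs(ratio - math.log(R / xi)) < 1e-12   # enhancement is exactly ln(R/xi)
assert ratio > 4.0                             # ~4.6 for R/xi = 100
```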
We found the complete family of solitary-wave solutions moving with subcritical velocities parallel to the wall. In addition, both a variational analysis and the Hamiltonian relationship between energy and momentum were used to give the velocity of the vortex as a function of its distance from the wall. These results are identical through the first correction term, where the small parameter is the inverse distance from the wall. Our main results are (i) that the vortex moves as if there were an image vortex on the other side of the wall, which essentially replaces the boundary condition (\ref{rhou}) with a more stringent requirement ${\bf u}\cdot {\bf n} = 0$ and (ii) that the depleted surface layer induces an effective shift in the position of the image in comparison with the case of the uniform flow. Specifically, the velocity of the vortex can be approximated by \begin{equation} U\approx\frac{ \hbar}{2m(y_0 - \sqrt{2}\xi)}, \end{equation} where $y_0$ is the distance from the center of the vortex to the wall, $\xi$ is the healing length of the condensate and $m$ is the mass of the boson. \section*{VI. Acknowledgements} NGB gratefully acknowledges the support from EPSRC. NGB and ALF thank the organisers of the workshop on Ultracold atoms held at the Aspen Center for Physics in June 2005, where this work was started, and Eugene Zaremba for a useful discussion during the workshop. This work continued at the Warwick workshop on Universal features in turbulence: from quantum to cosmological scales (December 2005); we thank S. Nazarenko and the other organizers for their hospitality.
cond-mat/0605350
\section{Introduction} \label{sec:intro} In recent years, ultracold atomic systems have served as a controlled and tunable toolbox for studying many-body quantum phenomena. The continuous tunability of the interaction by Feshbach resonances makes these systems ideal candidates to study the crossover from momentum-space pairing in the Bardeen--Cooper--Schrieffer (BCS) theory to a Bose-Einstein condensate (BEC) of fermions bound into molecules. This BCS-BEC crossover has been one of the most studied problems in recent experiments in both magnetic and optical traps \cite{BEC-BCS-expt,Hulet-PRL2005,Grimm-in-situ,Hulet-Science2006,expt-univ-energy,expt-ramp} and optical lattices \cite{opt-latt-expt}. Tuning across the Feshbach resonance, one traverses the whole range of the gas parameter $k_F a_s$, where $k_F$ is the Fermi momentum and $a_s$ is the s-wave scattering length. The regime of $k_F |a_s| \ll 1$ (negative $a_s$) is described by BCS theory. At $k_F a_s \ll 1$ (positive $a_s$) fermions pair into bosonic molecules and form a BEC. At the microscopic scale, the BCS and BEC regimes are radically different; however, the macroscopic, and, in particular, critical behaviour is expected to be qualitatively the same for the whole range of $k_F a_s$: the system undergoes a superfluid (SF) phase transition at a certain critical temperature. Separating the BCS and BEC extremes is the so-called unitarity point $(k_F a_s)^{-1} \to 0$. It is worth noting that the unitarity regime is approximately realized in the inner crust of neutron stars, where the neutron-neutron scattering length is nearly an order of magnitude larger than the mean interparticle separation \cite{neutron-star}. The Fermi gas at unitarity is a peculiar case of a strongly interacting system with no interaction-related energy scale: the divergent scattering length and any related energy scale drop out completely.
This gives rise to universality of the dilute gas properties, in the sense that the only relevant energy scale left in the system is given by the density, $n$. Because of this universality one obtains a unified description of such diverse systems as cold atoms in magnetic or optical traps, the Fermi-Hubbard model in optical lattices and the inner crusts of neutron stars. The theoretical description of the Fermi gas in the BCS-BEC crossover regime is a major challenge, since the system features no small parameter on which one could build a theory in a rigorous way. The original analytical treatments were confined to zero temperature and were based on the extension of the BCS-type many-body wave function \cite{classiki}. Most of the subsequent elaborations are also of mean-field type (with or without remedies for the effects of fluctuations) \cite{NSR,Haussmann94,Randeria95,Holland-Timmermans,Ohashi-Griffin,Perali,LiuHu}. The accuracy and reliability of such approximations are questionable since they inevitably involve an uncontrollable approximation. Numerical investigations of unitary Fermi gases are hampered by the sign problem, inherent in any Monte Carlo (MC) simulation of fermion systems \cite{BinderLandau-book,TroyerWiese}. One way of avoiding the sign problem at the expense of a systematic error is the fixed-node Monte Carlo framework, which has been used to study the ground state \cite{Giorgini-Carlson}. The systematic error of the fixed-node Monte Carlo depends on the quality of the variational ansatz for the nodal structure of the many-body wave function and is not known precisely. Only in a few exceptional cases can the sign problem be avoided without incurring systematic errors. One such case is given by fermions with attractive contact interaction, for which a number of sign-problem-free schemes have been introduced \cite{Hirsch,Rombouts,Rubtsov,Kaplan}. Fortunately, this system can be tuned to the unitarity regime.
Still, despite a number of calculations at finite temperature \cite{Bulgac,Wingate,Lee-Schaefer}, an accurate description of the finite-temperature properties of the unitary Fermi gas is missing. In the present paper, we simulate the Fermi-Hubbard model in the unitary regime by means of a determinant diagrammatic Monte Carlo method. By studying the dilute limit of the model, we extract properties of the homogeneous continuum Fermi gas. A brief summary of the main results has been given in Ref.~\cite{we-short}. Here we provide a detailed description of the Monte Carlo scheme and methods of data analysis. We also report new results relevant to experimental realizations of the Fermi-Hubbard model in optical lattices and trapped Fermi gases. The Fermi-Hubbard model is defined by the Hamiltonian \numparts \begin{equation} H = H_0 + H_1, \label{AHM:1} \end{equation} \begin{equation} H_0 = \sum_{\mathbf{k}\sigma} \left( \epsilon_\mathbf{k} - \mu \right) c^{\dagger}_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma},\label{AHM:2} \end{equation} \begin{equation} H_1 = U \sum_\mathbf{x} n_{\mathbf{x}\uparrow} n_{\mathbf{x}\downarrow}\; . \label{AHM:3} \end{equation} \endnumparts Here $c^{\dagger}_{\mathbf{k}\sigma}$ is a fermion creation operator, $n_{\mathbf{x}\sigma} = c^{\dagger}_{\mathbf{x}\sigma} c_{\mathbf{x}\sigma}$, $\sigma = \uparrow, \downarrow$ is the spin index, $\mathbf{x}$ enumerates $L^3$ sites of the three-dimensional (3D) simple cubic lattice with periodic boundary conditions, the quasimomentum $\mathbf{k}$ spans the corresponding Brillouin zone, $\epsilon_\mathbf{k}=2t \sum_{\alpha=1}^{3}(1-\cos k_\alpha a)$ is the single-particle tight-binding spectrum, $a$ and $t$ are the lattice spacing and the hopping amplitude, respectively, $\mu$ stands for the chemical potential and $U<0$ is the on-site attraction. Without loss of generality we henceforth set $a$ and $t$ equal to unity; the effective mass at the bottom of the band is then $m=1/2$.
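The quoted effective mass can be checked directly from the dispersion: for $t=a=1$, expanding $\epsilon_\mathbf{k}$ near the band bottom gives $\epsilon_\mathbf{k}\approx k^2 = k^2/2m$ with $m=1/2$, with only quartic corrections in $k$. A short numerical sketch:

```python
import math

def eps(kx, ky, kz, t=1.0, a=1.0):
    """Tight-binding dispersion epsilon_k = 2t sum_alpha (1 - cos(k_alpha a))."""
    return 2.0*t*((1.0 - math.cos(kx*a))
                  + (1.0 - math.cos(ky*a))
                  + (1.0 - math.cos(kz*a)))

# Near the band bottom eps_k ~ k^2 = k^2/(2m) with m = 1/(2 t a^2) = 1/2;
# the deviation from k^2 is quartic in k.
for k in (1e-2, 1e-3):
    assert abs(eps(k, 0.0, 0.0) - k**2) < k**4
```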
In Sec.\ \ref{sec:2body} we will study the two-body problem at zero temperature, show how the Hubbard model can be used to study the continuum unitary gas and investigate the functional structure of lattice corrections to the continuum behaviour. In Sec.\ \ref{sec:ddmc} we discuss the finite-temperature diagrammatic expansion for the Hubbard model (Sec.\ \ref{ssec:matsubara}), and give a qualitative description of the Monte Carlo procedure to sum the diagrammatic series (Sec.\ \ref{ssec:MC}), with details of the updating procedures given in Appendix \ref{sec:updates}. In order to extract the thermodynamic limit properties from MC data, we use the finite-size scaling analysis described in Sec.\ \ref{sec:sense}. Section\ \ref{sec:thermodynamics} gives an overview of the scaling functions describing thermodynamics of the unitary gas, and results are presented and discussed in Sec.\ \ref{sec:results}. \section{Two-body problem} \label{sec:2body} \begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure1.eps} \caption{ Diagrammatic series for the vertex insertion $\Gamma(\xi,\mathbf{p})$ (heavy dot). Small dots represent the bare Hubbard interaction $U$, and lines are the single-particle propagators. } \label{fig:ladder} \end{figure} Consider the quantum-mechanical problem of two fermions at zero temperature described by the Hamiltonian (\ref{AHM:1})--(\ref{AHM:3}) with $\mu=0$. The most straightforward way to tackle this problem is within the diagrammatic technique in the momentum-frequency representation \cite{Fetter-Walecka,IXtom}, which, in the present case, is built on four-point vertices, $U$, with two incoming (spin-$\uparrow$ and spin-$\downarrow$) and two outgoing (spin-$\uparrow$ and spin-$\downarrow$) ends, connected by single-particle propagators. The scattering of two particles is then described by a series of ladder diagrams \cite[\S16]{IXtom} shown in \fref{fig:ladder}.
Ladder diagrams can be summed by introducing the vertex insertion $\Gamma(\xi,\mathbf{p})$, which depends on frequency $\xi$ and momentum $\mathbf{p}$. Since $\Gamma (0,0)$ is proportional to the scattering amplitude, the unitarity limit corresponds to $\Gamma(\xi\to 0,\mathbf{p}\to \mathbf{0})\to \infty$. The summation depicted in \fref{fig:ladder} leads to \begin{equation} \Gamma^{-1}(\xi,\mathbf{p}) = U^{-1} + \Pi(\xi,\mathbf{p})\; , \label{Dyson} \end{equation} where $\Pi(\xi,\mathbf{p})$ is the polarization operator (the integration is over the Brillouin zone): \begin{equation} \Pi(\xi,\mathbf{p}) = \int_{\mathrm{BZ}} \frac{\rmd\mathbf{k}}{(2\pi)^3} \frac{1}{\xi+\epsilon_{\mathbf{p}/2+\mathbf{k}}+\epsilon_{\mathbf{p}/2-\mathbf{k}}} \; . \label{polariz} \end{equation} It immediately follows from Eqs.\ (\ref{Dyson}) and (\ref{polariz}) that the unitary limit corresponds to $U=U_*$, where \begin{equation} U^{-1}_* = -\Pi(0,\mathbf{0}) = -\int_\mathrm{BZ} \frac{\rmd\mathbf{k}}{(2\pi)^3} \, \frac{1}{2\epsilon_\mathbf{k}}\; . \label{U-unitary} \end{equation} A straightforward numerical integration yields $U_*\approx -7.915t$. In the limit of vanishing filling factor $\nu \to 0$ for the many-body problem of (\ref{AHM:1})--(\ref{AHM:3}), the typical values of $\xi$ and $p$ are related to the Fermi energy $\xi \sim \epsilon_F \sim \nu^{2/3}$ and Fermi momentum $p \sim k_F \sim \nu^{1/3}$ and are small compared to the bandwidth and reciprocal lattice vector, respectively. In zeroth order with respect to $\nu$, the lattice system is identical to the continuum one.
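The value of $U_*$ quoted above can be reproduced with a crude midpoint quadrature of (\ref{U-unitary}); the grid size below is an arbitrary choice, and the midpoint offset keeps the integrable $k=0$ singularity off the grid:

```python
import numpy as np

# Midpoint-rule estimate of U_*^{-1} = -int_BZ d^3k/(2pi)^3 1/(2 eps_k),
# with eps_k = 2 sum_alpha (1 - cos k_alpha), in units t = a = 1.
# The mean of the integrand over the grid equals the BZ integral with
# measure d^3k/(2pi)^3, since the BZ volume is (2pi)^3.
N = 100
k = -np.pi + (np.arange(N) + 0.5) * 2.0*np.pi / N   # offset grid avoids k = 0
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
eps = 2.0*((1.0 - np.cos(kx)) + (1.0 - np.cos(ky)) + (1.0 - np.cos(kz)))
U_star = -1.0 / np.mean(1.0 / (2.0*eps))
assert -8.1 < U_star < -7.8   # crude grid; consistent with U_* ~ -7.9 t
```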
Indeed, by combining (\ref{Dyson}), (\ref{polariz}) and (\ref{U-unitary}), we get \begin{equation} \Gamma^{-1}(\xi,\mathbf{p}) = \int_{\mathrm{BZ}} \frac{\rmd\mathbf{k}}{(2\pi)^3} \left[ \frac{1}{\xi+\epsilon_{\mathbf{p}/2+\mathbf{k}}+ \epsilon_{\mathbf{p}/2-\mathbf{k}}} - \frac{1}{2\epsilon_{\mathbf{k}}} \right]\; , \label{Gamma-latt} \end{equation} and observe that for small $\xi $ and $p$ one can replace $\varepsilon({\bf k})$ with $k^2$ and extend integration over $\rmd\mathbf{k}$ to the whole momentum space with the result \begin{equation} \Gamma^{-1}_\mathrm{cont}(\xi,\mathbf{p}) = -\frac{m^{3/2}}{4\pi}\sqrt{\xi+\frac{p^2}{4m}} \; . \label{Gamma-cont} \end{equation} With this form of $\Gamma_\mathrm{cont}$, and particle propagators based on the parabolic dispersion relation one recovers the continuum limit behaviour. Now we are in a position to treat the lattice corrections. The first correction should come from $\Gamma$, not from propagators, since only in $\Gamma$ large momenta play a special role due to resonance in the two-particle channel. In the lowest non-vanishing order in $\xi$ and $\mathbf{p}$ we have (the summation over repeating subscripts is implied) \begin{equation} \Gamma^{-1} \approx - \int_\mathrm{BZ} {\rmd{\bf k}\over (2\pi)^3} \, \frac{\xi + (1/4)(\partial^2 \varepsilon_\mathbf{k}/\partial k_i \partial k_j) p_i p_j } {4\varepsilon_\mathbf{k}^2} \; . 
\end{equation} with the difference between the lattice and continuum models given by \begin{equation} \Gamma^{-1} - \Gamma^{-1}_\mathrm{cont}\; \approx\; \frac{\xi}{4} A + \frac{p^2}{16} B\; , \end{equation} where \begin{eqnarray} A = \int {\rmd{\bf k}\over (2\pi)^3} \, {1 \over (k^2/2m)^2} - \int_\mathrm{BZ} {\rmd{\bf k}\over (2\pi)^3} \, {1\over \varepsilon_\mathbf{k}^2} \; ,\label{Gamma-A}\\ B = \int {\rmd{\bf k}\over (2\pi)^3} \, {1/m \over (k^2/2m)^2} - \int_\mathrm{BZ} {\rmd{\bf k}\over (2\pi)^3} \, {(\partial^2 \varepsilon_\mathbf{k}/\partial k_x \partial k_x)\over \varepsilon_\mathbf{k}^2}\; . \label{Gamma-B} \end{eqnarray} In the limit of $\xi\to 0$ and $p\to 0$, we have $\Gamma^{-1} \approx \Gamma^{-1}_\mathrm{cont} \sim k_F \sim \nu^{1/3}$, and $\Gamma^{-1}-\Gamma^{-1}_\mathrm{cont} \sim k_F^2 \sim \nu^{2/3} $. Hence, the leading lattice correction is of the form \begin{equation} \frac{\Gamma(\xi,\mathbf{p}) - \Gamma_\mathrm{cont}(\xi,\mathbf{p})}{\Gamma(\xi,\mathbf{p})} \; \sim\; \nu^{1/3}\; . \label{nu-13} \end{equation} Incidentally, Eqs. (\ref{Gamma-A}) and (\ref{Gamma-B}) hint at an intriguing possibility of completely suppressing the leading-order lattice correction by tuning the single-particle spectrum $\epsilon_\mathbf{k}$ so that $A=B=0$. We did not explore this possibility in the present study. \section{Determinant Diagrammatic Monte Carlo} \label{sec:ddmc} The diagrammatic technique employed in the previous section is not particularly convenient for numerical studies. In this section, we briefly review the Matsubara technique and then present a Monte Carlo scheme for summing the resultant diagrammatic series. \subsection{Rubtsov's representation} \label{ssec:matsubara} To construct a diagrammatic expansion for the model (\ref{AHM:1})--(\ref{AHM:3}) we follow Refs.~\cite{Rubtsov,wePRB} and consider the statistical operator in the coordinate--imaginary-time representation.
In the interaction picture we get: \begin{equation} \exp(-\beta H)\; =\; \exp(-\beta H_0)\, \mathcal{T}_\tau \exp\left( - \int_0^\beta \, \rmd\tau H_1(\tau)\right)\; , \label{statOper} \end{equation} where $\beta$ is an inverse temperature, $H_1(\tau) = e^{\tau H_0} H_1 e^{-\tau H_0} $, and $\mathcal{T}_\tau$ stands for the imaginary time ordering. Expanding Eq.\ (\ref{statOper}) in powers of $H_1$, one obtains for the partition function: \begin{equation} \fl Z \; =\; \sum_{n=0}^\infty (-U)^n \sum_{\mathbf{x}_1 \dots \mathbf{x}_n}% \int_{0<\tau_1<\tau_2< \dots < \beta} \prod_{j=1}^{n} \rmd\tau_j % \Tr\left[ e^{-\beta H_0} \prod_{j=1}^{n}% c_\uparrow^\dagger(\mathbf{x}_j \tau_j) c_\uparrow(\mathbf{x}_j \tau_j)% c_\downarrow^\dagger(\mathbf{x}_j \tau_j) c_\downarrow(\mathbf{x}_j \tau_j)% \right]\; . \label{Z-diagr} \end{equation} Expansion (\ref{Z-diagr}) generates the standard set of Feynman diagrams. Graphically, the diagrams are similar to those of Sec.\ \ref{sec:2body}, and consist of the four-point vertices, $U$, connected by the single-particle propagators $G_\sigma^{(0)} (\mathbf{x}_i-\mathbf{x}_j, \tau_i-\tau_j; \mu,\beta) = - \Tr \left[\mathcal{T}_\tau e^{-\beta H_0} c_\sigma^\dagger(\mathbf{x}_i \tau_i)c_\sigma(\mathbf{x}_j \tau_j)% \right]$. The $p$-th order of the perturbation theory is then graphically given by a set of $(p!)^2$ possible interconnections of vertices by propagators, see \fref{fig:Z}. \begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure2.eps} \caption{ Diagrammatic series for the partition function. Upper line is the graphical representation of the series (\ref{Z-diagr}), lower line depicts Eq.\ (\ref{Z-summed}). 
Diagram signs are shown explicitly.} \label{fig:Z} \end{figure} The diagrammatic expansion (\ref{Z-diagr}) is \emph{unsuitable} for direct Monte Carlo simulation since it has a sign problem built in: different terms in the series have different signs --- a closed fermion loop brings in an extra minus sign \cite{Fetter-Walecka}. The trick is to consider all diagrams of a given order $p$ with a fixed vertex configuration \begin{equation} \mathcal{S}_p = \{ (\mathbf{x}_j,\tau_j),~~j=1,\dots,p \} \label{conf-p} \end{equation} as one. This implies summation over the $(p!)^2$ ways of connecting vertices by propagators. Upon summation, Eq.\ (\ref{Z-diagr}) takes on the form \cite{Rubtsov}: \begin{equation} Z = \sum_{p=0}^\infty (-U)^p \sum_{\mathcal{S}_p} \det \mathbf{A}^{\uparrow}(\mathcal{S}_p) \det \mathbf{A}^{\downarrow}(\mathcal{S}_p)\; , \label{Z-summed} \end{equation} where \begin{equation} \sum_{\mathcal{S}_p} \equiv \sum_{\mathbf{x}_1 \dots \mathbf{x}_p}% \int_{0<\tau_1<\tau_2< \dots < \tau_p < \beta} \prod_{j=1}^{p} \rmd\tau_j\; , \label{S} \end{equation} and $\mathbf{A}^{\sigma}(\mathcal{S}_p)$ are the $p \times p$ matrices built on the single-particle propagators: \begin{equation} A^{\sigma}_{ij}(\mathcal{S}_p) = G^{(0)}_\sigma (\mathbf{x}_i - \mathbf{x}_j, \tau_i-\tau_j)\; ,~~~i,j=1,\dots,p\; . \label{matrix} \end{equation} For equal numbers of spin-up and spin-down particles $\det \mathbf{A}^\uparrow \det \mathbf{A}^\downarrow = | \det \mathbf{A} |^2$, and \emph{the sign problem is absent}. \footnote{At half filling, the sign of $U$ changes if the hole representation is used for one of the spin components. Hence, this method is also applicable to the half-filled repulsive Hubbard model. } Graphically, Feynman diagrams in this representation are just collections of vertices, see \fref{fig:Z}.
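The resummation behind Eq.\ (\ref{Z-summed}) is just the Leibniz expansion of a determinant: for a fixed vertex configuration, the $p!$ ways of connecting, e.g., the spin-up propagators correspond to permutations weighted by the closed-loop sign. A toy check with a random matrix standing in for $\mathbf{A}(\mathcal{S}_p)$ (a sketch, not the actual $G^{(0)}$):

```python
import itertools
import numpy as np

def leibniz_det(A):
    """Sum over all ways of pairing rows with columns ("propagator
    connections"), each weighted by the permutation parity -- the
    closed-fermion-loop sign discussed in the text."""
    p = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(p)):
        # parity of the permutation via its inversion count
        inversions = sum(1 for i in range(p) for j in range(i + 1, p)
                         if perm[i] > perm[j])
        prod = 1.0
        for i in range(p):
            prod *= A[i, perm[i]]
        total += (-1)**inversions * prod
    return total

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))   # stand-in for the propagator matrix A(S_p)
assert abs(leibniz_det(A) - np.linalg.det(A)) < 1e-10
```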
For future use, we define the set of all possible vertex configurations (\ref{conf-p}) by $\mathfrak{S}^{(Z)}$, \textit{i.e.,} $\mathfrak{S}^{(Z)} = \{p, \{ \mathcal{S}_p \} \}$. The following two-point pair correlation function will prove useful: \begin{equation} G_2(\mathbf{x}\tau; \mathbf{x}'\tau')\; =\; % \left\langle\, \mathcal{T}_\tau P(\mathbf{x},\tau) P^\dagger(\mathbf{x}',\tau')\, \right\rangle \; \equiv \; {g_2(\mathbf{x}\tau; \mathbf{x}'\tau') \over Z}\; , \label{corr} \end{equation} \begin{equation} g_2(\mathbf{x}\tau; \mathbf{x}'\tau') \; =\; {\rm Tr}\, \mathcal{T}_\tau P(\mathbf{x},\tau) P^\dagger(\mathbf{x}',\tau')\, {\rm e}^{-\beta H} \; , \label{g_2} \end{equation} where $P(\mathbf{x},\tau)$ and $P^\dagger(\mathbf{x}',\tau')$ are the pair annihilation and creation operators in the Heisenberg picture, respectively: $P(\mathbf{x},\tau) = c_{\mathbf{x}\uparrow}(\tau) c_{\mathbf{x}\downarrow}(\tau)$. The non-zero asymptotic value of $G_2(\mathbf{x}\tau; \mathbf{x}'\tau')$ as $|\mathbf{x}-\mathbf{x}'| \to \infty$ is proportional to the condensate density. Feynman diagrams for $g_2(\mathbf{x}\tau; \mathbf{x}'\tau')$ are similar to those for $Z$, but contain two extra elements: a pair of two-point vertices with two incoming (outgoing) ends which represent $P(\mathbf{x},\tau)$ ( $P^\dagger(\mathbf{x}',\tau')$ ), see \fref{fig:G}. The vertex configurations for the correlation function~(\ref{corr}) slightly differ from those for the partition function (\ref{conf-p}) by the presence of the two extra elements: the configuration space for Eq.\ (\ref{corr}) is $\mathfrak{S}^{(G)} = \{ p, \{ \tilde{\mathcal{S}}_{p} \} \}$, with \begin{equation} \tilde{\mathcal{S}}_p = \{ P(\mathbf{x},\tau),\, P^\dagger(\mathbf{x}',\tau'),\, (\mathbf{x}_j,\tau_j),~~j=1,\dots,p \} \; . 
\label{conf-p-corr} \end{equation} The partially summed diagrammatic expansion for $g_2(\mathbf{x}\tau; \mathbf{x}'\tau')$ is similar to Eq.\ (\ref{Z-summed}): \begin{equation} g_2(\mathbf{x}\tau; \mathbf{x}'\tau') = \sum_{p=0}^\infty (-U)^p \sum_{\tilde{\mathcal{S}}_p} \det \widetilde{\mathbf{A}}^{\uparrow}(\tilde{\mathcal{S}}_p) \det \widetilde{\mathbf{A}}^{\downarrow}(\tilde{\mathcal{S}}_p)\; , \label{G-summed} \end{equation} where $\widetilde{\mathbf{A}}^\sigma(\tilde{\mathcal{S}}_p)$ is a $(p+1)\times(p+1)$ matrix which differs from Eq.\ (\ref{matrix}) only in that it has an extra row $i_0$ and an extra column $j_0$ such that $\widetilde{A}^\sigma_{ij_0} = G^{(0)}_\sigma (\mathbf{x}_i - \mathbf{x}, \tau_i-\tau)$ and $\widetilde{A}^\sigma_{i_0 j} = G^{(0)}_\sigma (\mathbf{x}' - \mathbf{x}_j, \tau' - \tau_j)$. \begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure3.eps} \caption{ Diagrammatic series for the correlation function (\ref{corr}). Diamonds represent the two-point creation/annihilation operators $P$ and $P^\dagger$. } \label{fig:G} \end{figure} Below we deal only with equal numbers of spin-up and spin-down fermions and (for the sake of brevity) suppress the spin indices wherever possible. We also use the generic notation (with superscripts) $\mathcal{D}(\mathcal{S}_p)$ for $p$-th order terms of the diagrammatic expansions similar to (\ref{Z-summed}), e.g., $\mathcal{D}^{(Z)}(\mathcal{S}_p) = (-U)^{p}\bigl| \det \mathbf{A}(\mathcal{S}_p) \bigr|^2$. To simplify the notation we also omit superscripts if this does not lead to ambiguity. \subsection{Diagrammatic Monte Carlo and Worm algorithm} \label{ssec:MC} Equations\ (\ref{Z-summed}) and (\ref{G-summed}) have a similar general structure of a series of integrals and sums with an ever-increasing number of integration variables and summations. In Refs.\ \cite{diagrMC,polaron00} it has been shown how to arrange a numerical procedure that sums such convergent series.
To this end one considers the space of all possible vertex configurations $\mathfrak{S}$ (for the series (\ref{Z-summed}) $\mathfrak{S} \equiv \mathfrak{S}^{(Z)}$, while for the series (\ref{G-summed}) $\mathfrak{S} \equiv \mathfrak{S}^{(G)}$), with the ``weight'' $\mathcal{D}(\mathcal{S}_p)$ associated with each element of the space. One then uses the Metropolis principle \cite{Metropolis} to arrange a stochastic Markov process which sequentially generates vertex configurations $\mathcal{S}_p$ according to their weights $\mathcal{D}(\mathcal{S}_p)$, thus sampling the space $\mathfrak{S}$. In the course of sampling, one also collects statistics for observables in the form of MC estimators, see Sec.\ \ref{ssec:estimators}. The stochastic process consists of elementary MC updates performed on vertex configurations $\mathcal{S}_p$. The set of updates is problem-specific, being restricted only by the requirements of (i) ergodicity, \textit{i.e.,} given a particular diagram $\mathcal{S}_p$ it takes a finite number of steps to convert it into any other diagram $\mathcal{S}'_q$, and (ii) detailed balance, \textit{i.e.,} the relative contributions of diagrams $\mathcal{S}_p$ and $\mathcal{S}'_q$ to the statistics are given by the ratio of their weights, $\mathcal{D}(\mathcal{S}_p)/\mathcal{D}(\mathcal{S}'_q)$. The set of updates satisfying these requirements is not unique; this freedom is used to maximize the efficiency of simulations, as explained in detail in \ref{sec:updates}. In view of the close similarity between the diagrammatic expansions (\ref{Z-summed}) and (\ref{G-summed}) it is advantageous to construct a Monte Carlo process which samples these two series in a single simulation. This way one has access to both diagonal, \textit{e.g.,} energy, and off-diagonal properties, \textit{e.g.,} the superfluid response.
An efficient way of performing such a concurrent simulation is provided by the worm algorithm, which was originally devised for worldline Monte Carlo simulations \cite{worm}. In the context of the diagrammatic determinant Monte Carlo, the generic worm algorithm principles imply the following. First, one works in the joint configuration space $\mathfrak{S}^{(Z)} \cup \mathfrak{S}^{(G)} $, accommodating diagrams (\ref{conf-p}) and (\ref{conf-p-corr}). Second, all the updates are performed {\it exclusively} in terms of the two-point vertices $P(\mathbf{x},\tau)$ and $P^\dagger(\mathbf{x},\tau)$---through their creation/annihilation, motion, and ``interactions'' with adjacent vertices. The worm-type updating procedures are further detailed in \ref{sec:updates}. Within the worm-algorithm framework, the configuration spaces $\mathfrak{S}^{(Z)}$ and $\mathfrak{S}^{(G)}$ are disjoint subsets of one extended configuration space. In what follows we will refer to them as $Z$- (or ``diagonal'') and $G$- (or ``off-diagonal'') sectors of the configuration space. Formally, the extended configuration space corresponds to the generalized partition function \begin{equation} Z_\mathrm{W}\; =\; Z + \zeta \sum_{\mathbf{x}, \mathbf{x}'} \int_0^{\beta} \rmd\tau \int_0^{\beta}\rmd\tau' g_2(\mathbf{x}\tau;\mathbf{x}'\tau')\; , \label{Z-worm} \end{equation} where the value of $\zeta$ is arbitrary~--- it controls the relative statistics of $Z$- and $G$-sectors and the efficiency of the simulation. \subsection{Monte Carlo estimators} \label{ssec:estimators} Suppose we have an observable $X(\alpha)$, which depends on a set of variables $\alpha$, \textit{e.g.,} temperature and chemical potential. An MC estimator for the observable $X$ is an expression which, upon averaging over the sequence of MC configurations, converges to the expectation value of $X(\alpha)$.
In accordance with Eq.\ (\ref{Z-worm}), the simplest worm-algorithm MC estimators are \begin{equation} \delta^{(Z)}(\mathcal{S}_p)\; =\; % \cases{ 1,& $\mathcal{S}_p \in \mathfrak{S}^{(Z)}$\; , \\ 0,& $\widetilde{\mathcal{S}}_p \in \mathfrak{S}^{(G)}$\; ,\\ } \label{delta-Z} \end{equation} and \begin{equation} \delta^{(G)}(\mathcal{S}_p)\; =\; % \cases{ 0,& $\mathcal{S}_p \in \mathfrak{S}^{(Z)}$\; , \\% 1,& $\widetilde{\mathcal{S}}_p \in \mathfrak{S}^{(G)}$\; .\\ } \label{delta-G} \end{equation} Their MC averages are \begin{equation} \bigl\langle \delta^{(Z)} \bigr\rangle_\mathrm{MC}\; \longrightarrow \; Z/Z_W\; , \label{delta-Z-MC} \end{equation} and \begin{equation} \bigl\langle \delta^{(G)} \bigr\rangle_\mathrm{MC}\; \longrightarrow \; Z_W^{-1}\zeta \sum_{\mathbf{x}, \mathbf{x}'} \int_0^{\beta} \rmd\tau \int_0^{\beta}\rmd\tau' \, g_2(\mathbf{x}\tau;\mathbf{x}'\tau') \; , \label{delta-ZG-MC} \end{equation} where $\langle \dots \rangle_\mathrm{MC}$ denotes averaging over the set of stochastically generated configurations. In particular, for our purposes it will be quite useful that \begin{equation} \frac{\bigl\langle \delta^{(G)} \bigr\rangle_\mathrm{MC}} {\bigl\langle \delta^{(Z)} \bigr\rangle_\mathrm{MC}}\; \longrightarrow \; \zeta \sum_{\mathbf{x}, \mathbf{x}'} \int_0^{\beta} \rmd\tau \int_0^{\beta}\rmd\tau' \, G_2(\mathbf{x}\tau;\mathbf{x}'\tau') \; . \label{useful} \end{equation} The general rules for constructing an estimator for a quantity $X(\alpha)$ specified by the diagrammatic expansion \begin{equation} X(\alpha) = \sum_{\mathcal{S}_p} \mathcal{D}^{(X)}(\alpha ; \mathcal{S}_p)\; , \label{XY} \end{equation} are standard. 
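The ratio estimator (\ref{useful}) can be illustrated on a toy two-sector configuration space with made-up weights (all numbers below are arbitrary): a Metropolis chain sampling states in proportion to their weights recovers the ratio of the total sector weights from the fraction of time spent in each sector.

```python
import random

# Toy extended configuration space: "Z-sector" and "G-sector" states
# with arbitrary made-up weights.
wz = [1.0, 2.0, 3.0]
wg = [0.5, 1.5]
states = [("Z", w) for w in wz] + [("G", w) for w in wg]

rng = random.Random(42)
cur = 0
nz = ng = 0
for _ in range(200000):
    new = rng.randrange(len(states))                  # symmetric proposal
    if rng.random() < states[new][1] / states[cur][1]:
        cur = new                                     # Metropolis acceptance
    if states[cur][0] == "Z":
        nz += 1
    else:
        ng += 1

# Analogue of Eq. (useful): <delta_G>/<delta_Z> -> sum(wg)/sum(wz)
assert abs(ng / nz - sum(wg) / sum(wz)) < 0.05
```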
We adopt a convenient convention: If the actual summation in (\ref{XY}) involves only a subset $\mathfrak{S}_0$ of vertex configurations---a typical example is an expansion defined within the $Z$-sector only---then we extend the summation over the entire configuration space by simply defining $\mathcal{D}^{(X)}(\mathcal{S}_p \not\in \mathfrak{S}_0)\equiv 0$. If vertex configurations $\mathcal{S}_p$ are sampled from the probability density $\mathcal{D}^{(Z_W)}(\mathcal{S}_p)$ which comes from the expansion for the generalized partition function: \begin{equation} Z_W(\alpha) = \sum_{\mathcal{S}_p} \mathcal{D}^{(Z_W)}(\alpha; \mathcal{S}_p)\; , \label{Z_W_series} \end{equation} then the MC estimator for $X(\alpha)$ is derived from \begin{equation} \bigl\langle X \bigr\rangle \,\equiv \, \frac{\bigl\langle \mathcal{Q}^{(X)} \bigr\rangle_\mathrm{MC}}{\bigl\langle \delta^{(Z)} \bigr\rangle_\mathrm{MC}} \; , \label{esti} \end{equation} as \begin{equation} \mathcal{Q}^{(X)}(\alpha;\mathcal{S}_p) \; =\; \frac{\mathcal{D}^{(X)}(\alpha;\mathcal{S}_p)} {\mathcal{D}^{(Z_W)}(\alpha;\mathcal{S}_p )} \; . \label{esti-2} \end{equation} In what follows, by the estimator for a quantity $x(\alpha)=X(\alpha)/Z(\alpha)$ we mean the corresponding function $\mathcal{Q}^{(X)}(\alpha;\mathcal{S}_p)$. \subsubsection{Estimators for number density and kinetic energy} \label{sssec:dens} We start with the estimator for the number density. The expectation value of the number density reads \begin{equation} \nu \; =\; \frac{2 \Tr c^{\dagger}_{\mathbf{x} \sigma}(\tau) c_{\mathbf{x} \sigma}(\tau) e^{-\beta H} }{Z}\; . \label{dens-gibbs} \end{equation} Here $(\mathbf{x},\tau)$ is an arbitrary space-time point (the system is space/time translational invariant) and $\sigma$ is one of the two spin projections; the factor of 2 comes from the spin summation. 
The diagrammatic expansion of the numerator is similar to that for $Z$, with the diagram weight given by \begin{equation} \mathcal{D}^{(\nu)}(\mathcal{S}_p)\; =\; 2\, (-U)^p \, \det \mathbf{B}^{\sigma}_{p+1}(\mathcal{S}_p, \mathbf{x},\tau) \, \det \mathbf{A}^{-\sigma}_{p}(\mathcal{S}_p)\; , \label{w-dens} \end{equation} Here $\mathcal{S}_p \in \mathfrak{S}^{(Z)}$, $\mathbf{A}^{-\sigma}_p(\mathcal{S}_p)$ is a $p \times p$ matrix (\ref{matrix}), and $\mathbf{B}^{\sigma}_{p+1}(\mathcal{S}_p, \mathbf{x},\tau)$ is a similar $(p+1)\times (p+1)$ matrix with an extra row and a column, corresponding to the extra creation and annihilation operators in the numerator of (\ref{dens-gibbs}), respectively. This immediately leads to the following estimator (\ref{esti-2}) for the number density \begin{equation} \mathcal{Q^{(\nu)}}(\mathcal{S}_p)\; =\; 2\; \frac{ \det \mathbf{B}^ {\sigma}_{p+1}(\mathcal{S}_p, \mathbf{x},\tau) }% { \det \mathbf{A}^{\sigma}_p(\mathcal{S}_p) }\; \delta^{(Z)}(\mathcal {S}_p)\; . \label{R-dens} \end{equation} We utilize the freedom of choosing $(\sigma,\mathbf{x},\tau)$ to suppress autocorrelations in measurements. The density measurement starts with randomly generated $(\sigma,\mathbf{x},\tau)$. The estimator for kinetic energy is derived similarly. One employs the coordinate-space expression for the kinetic energy in terms of hopping operators and deals with a slightly generalized version of Eq.~(\ref{dens-gibbs}): \begin{equation} \langle\, c^{\dagger}_{\mathbf{x+g}, \sigma}\, c_{\mathbf{x} \sigma}\, \rangle \; =\; \, \frac{ \Tr c^{\dagger}_{\mathbf{x+g}, \sigma}(\tau) \, c_{\mathbf{x} \sigma}(\tau) \, e^{-\beta H} }{Z}\; . 
\label{eps} \end{equation} The rest is identical to the previous discussion up to the replacement $\mathbf{B}^{\sigma}_{p+1}(\mathcal{S}_p, \mathbf{x},\tau) \; \to \; \mathbf{B}^{\sigma}_{p+1}(\mathcal{S}_p, \mathbf{x},{\bf g}, \tau)$, since now the spatial position of the creation operator is shifted from that of the annihilation operator by the vector ${\bf g}$. In our case, only the nearest-neighbor correlator (\ref{eps}) has to be computed. \subsubsection{Estimator for the interaction energy} \label{sssec:PE} The estimator for the interaction energy \begin{equation} \langle H_1 \rangle \; =\; { \Tr H_1 e^{-\beta H} \over Z} \label{PE} \end{equation} is readily constructed using a generic trick of finding the expectation value of the operator in terms of which the perturbative expansion is performed. Consider the Hamiltonian $H(\lambda) = H_0 + \lambda H_1$ and observe that \begin{equation} \Tr H_1 e^{-\beta H}\; =\; -{1\over \beta}\, {\partial \over \partial \lambda} \, \Tr e^{-\beta H} \; \equiv\; -{1\over \beta}\, {\partial Z \over \partial \lambda} \; . \label{PE-identity} \end{equation} Differentiating Eq.~(\ref{Z-diagr}) for $Z=Z(\lambda)$ and setting $\lambda =1$ afterwards is straightforward since the diagram of order $p$ is proportional to $\lambda^p$. Hence \begin{equation} \mathcal{Q}^{(H_1)}\left( \mathcal{S}_p\right)\; =\; - \beta^{-1}p\; \delta^{(Z)}(\mathcal{S}_p) \; . \label{R-PE} \end{equation} \subsubsection{Estimator for the integrated correlation function} \label{sssec:g-im} Following the general treatment of Ref.\ \cite{polaron00}, one can construct an estimator for the correlation function (\ref{corr}). 
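Estimators such as (\ref{R-dens}) and its kinetic-energy analogue require the ratio of a bordered $(p+1)\times(p+1)$ determinant to the underlying $p\times p$ one. A useful practical point is that such a ratio never needs a fresh $O(p^3)$ determinant evaluation: by the block-determinant (Schur-complement) identity it costs $O(p^2)$ once the inverse of the smaller matrix is at hand. A minimal Python sketch on a random well-conditioned matrix (all names and sizes are illustrative, not part of the algorithm's actual data structures):

```python
import numpy as np

def bordered_det_ratio(A_inv, b, c, d):
    """det(B)/det(A) for the bordered matrix B = [[A, b], [c^T, d]],
    via the identity det(B) = det(A) * (d - c^T A^{-1} b);
    cost O(p^2) once A_inv is known."""
    return d - c @ A_inv @ b

rng = np.random.default_rng(1)
p = 6
A = rng.normal(size=(p, p)) + p * np.eye(p)   # well-conditioned test matrix
b = rng.normal(size=p)                        # extra column
c = rng.normal(size=p)                        # extra row
d = rng.normal()                              # corner element
fast = bordered_det_ratio(np.linalg.inv(A), b, c, d)

# brute-force check against explicit determinants
B = np.block([[A, b[:, None]], [c[None, :], np.array([[d]])]])
slow = np.linalg.det(B) / np.linalg.det(A)
```

The same identity underlies the cheap evaluation of $\det \mathbf{B}^{\sigma}_{p+1}/\det \mathbf{A}^{\sigma}_p$ when the inverse of $\mathbf{A}^{\sigma}_p$ is kept up to date.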
In this work we just need to sum and integrate this correlator over all its variables (see Sec.\ \ref{sec:sense}): \begin{equation} K(L,T)\; =\; (\beta L^d)^{-2} \sum_{\mathbf{x}, \mathbf{x}'} \int_0^{\beta} \rmd\tau \int_0^{\beta}\rmd\tau' G_2(\mathbf{x}-\mathbf{x}',\, \tau - \tau')\; , \label{Q-rescaled} \end{equation} and the estimator for $K(L,T)$ is particularly simple: \begin{equation} \mathcal{Q}^{(K)}(\mathcal{S}_p)\; =\; (\beta L^d)^{-2} \zeta^{-1} \delta^{(G)}(\mathcal{S}_p) \; . % \label{R-g-im} \end{equation} \section{Extrapolation towards macroscopic continuum system} \label{sec:sense} The MC setup discussed in Sec.\ \ref{sec:ddmc} works in the grand canonical ensemble with external parameters $\{L,T,\mu \}$. In order to extract the critical temperature of a continuum gas from MC data, one has to perform a two-step extrapolation. (i) Upon taking the limit of $L\to \infty$ one obtains $T_c(\mu)$, the critical temperature of a lattice system at a given chemical potential, and translates it into $T_c(\nu)$ by extrapolating the measured filling factor to the infinite system size: $\nu\, \equiv\, \nu(\mu,\, T=T_c(\mu), \, L\to\infty)$. (ii) The extrapolation towards the continuum limit is then done by taking the limit of $\nu\to 0$. The latter procedure is based on the results presented in Sec.\ \ref{sec:2body}. The finite-size extrapolation is performed by considering a series of system sizes $L_1 < L_2 < L_3 \,\dots$\, . At the critical point the correlation function (\ref{corr}) decays at large distances as a power law: $G_2(\mathbf{x}-\mathbf{x}',\, \tau - \tau') \propto |\mathbf{x}-\mathbf{x}'|^{-(1+\eta)}$, where $\eta$ is the anomalous dimension \cite{Fisher}. Since we expect the transition to belong to the U(1) universality class, we take $\eta \approx 0.038$. 
If one rescales the summed correlator (\ref{Q-rescaled}) according to \begin{equation} R(L,T) = L^{1+\eta} K(L,T)\; , \label{K-rescaled} \end{equation} the corresponding quantity is supposed to become size-independent at the critical point, i.e. the crossing of the $R(L_i,T)$ and $R(L_j,T)$ curves can be used to obtain an estimate $T_{L_i,L_j}$ for the critical temperature $T_c(\mu)$ \cite{Binder81}. Indeed, for temperatures in the vicinity of the critical point the correlation length diverges as $\xi_\mathrm{corr} \propto |t|^{-\nu_\xi}$, where $t=(T_c(\mu)-T)/T_c(\mu)$, and $\nu_\xi\approx 0.671$ for the U(1) universality class. In the renormalization group (RG) framework \cite{Fisher}, the finite-size scaling of the rescaled correlator $R$ obeys the relation \begin{equation} R = f\left(x\right)(1 + cL^{-\omega}+\dots)\; , \label{RG1} \end{equation} where $x=(L/\xi_\mathrm{corr})^{1/\nu_\xi}$ is the dimensionless scaling variable, $f(x)$ is the universal scaling function analytic at $x = 0$, $c$ is a non-universal constant, $\omega\approx 0.8$ is the critical exponent of the leading irrelevant field \cite{Zinn-Justin}, and the dots represent higher-order corrections to scaling. If the irrelevant-field corrections were not present, all $R(L_i,T)$ curves would intersect at a unique point, $T_c(\mu)$. Expanding Eq.\ (\ref{RG1}) up to terms linear in $t$, one obtains for the crossing $T_{L_i,L_j}$ \begin{equation} T_{L_i,L_j} - T_c(\mu)\; =\; \frac{\mathrm{const}}{L_j^{1/\nu_\xi+\omega}}% \, \frac{\left( L_j/L_i \right)^{\omega} - 1 }% { 1 - \left( L_i/L_j \right)^{1/\nu_\xi} }\; . \label{Tc-fit} \end{equation} To employ Eq.\ (\ref{Tc-fit}) one performs a linear fit of the sequence of $T_{L_i,L_j}$ against the right-hand side of Eq.\ (\ref{Tc-fit}) for several pairs of system sizes. The intercept of the best-fit line yields the thermodynamic-limit critical temperature $T_c(\mu)$. 
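The crossing-and-extrapolation procedure of Eqs.\ (\ref{RG1})-(\ref{Tc-fit}) is easy to prototype. The Python sketch below generates synthetic $R(L,T)$ data from the scaling form (\ref{RG1}) with a linear scaling function, locates pairwise crossings by bisection, and fits them against the right-hand side of Eq.\ (\ref{Tc-fit}); all parameter values are illustrative rather than taken from our simulation.

```python
import numpy as np

# Illustrative parameters: "true" T_c, U(1) exponents, scaling-correction
# amplitude C and scaling-function slope B (none taken from the MC data)
TC, NU_XI, OMEGA, C, B = 0.15, 0.671, 0.8, 0.3, 0.5

def R(L, T):
    """Synthetic rescaled correlator obeying the form (RG1),
    with a linear scaling function f(x) = 1 + B*x."""
    x = (TC - T) / TC * L ** (1.0 / NU_XI)
    return (1.0 + B * x) * (1.0 + C * L ** (-OMEGA))

def crossing(L_i, L_j, lo=0.10, hi=0.16, n_iter=80):
    """Bisection for the temperature where R(L_i,T) = R(L_j,T)."""
    g = lambda T: R(L_i, T) - R(L_j, T)
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

pairs = [(6, 8), (6, 12), (8, 12), (8, 16), (12, 16)]
a = 1.0 / NU_XI
T_cross = [crossing(L_i, L_j) for L_i, L_j in pairs]
rhs = [((L_j / L_i) ** OMEGA - 1.0)
       / (1.0 - (L_i / L_j) ** a) / L_j ** (a + OMEGA)
       for L_i, L_j in pairs]
slope, tc_est = np.polyfit(rhs, T_cross, 1)   # intercept estimates T_c
```

With these inputs the intercept recovers the built-in $T_c$ to well within the leading-order accuracy of Eq.\ (\ref{Tc-fit}).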
Note that if the universality class is not U(1) and the values of $\eta$, $\nu_{\xi}$, $\omega$ are different, or system sizes are too small to justify the scaling limit, the whole procedure fails. Hence, the adopted scheme of pinpointing $T_c$ features a built-in consistency check. \section{Thermodynamic scaling functions of a unitary Fermi gas} \label{sec:thermodynamics} As has been noted in Sec.\ \ref{sec:intro}, the only relevant microscopic energy scale in the continuum unitary Fermi gas is given by the Fermi energy $\varepsilon_F = \kappa\, {\hbar^2 n^{2/3} / m}$, where $\kappa = (3\pi^2)^{2/3}/2$ for a two-component Fermi gas. Therefore, as was first noticed in Ref.\ \cite{Ho04}, all thermodynamic potentials feature self-similarity properties and can be written in terms of dimensionless scaling functions of the dimensionless temperature $x=T/\varepsilon_F $. All scaling functions are mutually related; it is sufficient to know just one of them to unambiguously restore the rest. Apart from the shape of the scaling functions, the self-similarity at the unitary point is {\it identical} to that of a non-interacting Fermi gas \cite{Vtom}, including the functional relations between different thermodynamic potentials. A derivation of the scaling functions and relations between them can be found in Ref.\ \cite{Ho04}. In this section, we render the scaling analysis in a form convenient for our MC study. In terms of the dimensionless chemical potential $y = \mu /\varepsilon_F$, the dimensionless equation of state reads $y = f_{\mu}(x)$. The $f_{\mu}$ function can be calculated numerically. Another quantity which is also available in our simulation is the dimensionless energy per particle ${E/(N \varepsilon_F)}= f_E(x)$. The scaling relations for other thermodynamic quantities are defined likewise. For instance, the entropy and pressure read $ S/N = f_S(x)$, and $ {P/(n \varepsilon_F)}=f_P(x) $. 
Though $f_S$ and $f_P$ are not directly calculated in our simulation, we can relate them to $f_E$. It is also important to relate $f_{\mu}$ to $f_E$, since this yields a consistency check for the numerical results. To establish the desired relations, we start with the scaling relation for the Helmholtz free energy ${F/(N \varepsilon_F)} = f_F(x)$ which in canonical variables reads \begin{equation} F(T,N,V) \; = \; \gamma \, f_F\left( T/\gamma(N/V)^{2/3}\right) \, (N/V)^{2/3}\, N \; , \end{equation} where $\gamma = {\kappa \hbar^2 / m}$. Then, for the entropy and pressure we have (the prime stands for the derivative) \begin{eqnarray} f_S \; = \; -f_F'\, , \label{S2} \\ f_P \; = \; (2/3) (f_F - f_F'\, x) \; . \label{P2} \end{eqnarray} The expression for energy in terms of $f_F$ is \begin{equation} f_E\; =\; f_F - f_F'\, x \; . \label{E2} \end{equation} We thus see that \begin{equation} f_P\; \equiv \; (2/3)f_E \; . \label{P3} \end{equation} One may also consider Eq.~(\ref{E2}) as a differential equation: \begin{equation} f_F - f_F'\, x \; = \; f_E \; \label{rel0} \end{equation} to be solved for $f_F$ by integrating from $x = \infty$ down to finite $x$, taking advantage of the known asymptotic behaviour of $f_F$ and $f_E$ for the weakly interacting two-component gas. Now, we note that from the general thermodynamic relation $E = -PV+TS+\mu N$ it immediately follows that \begin{equation} f_E \; = \; -f_P+xf_S+f_{\mu} \; , \label{rel3} \end{equation} which, in turn, leads to the following relations \begin{eqnarray} f_S\; =\; {(5/3)f_E-f_{\mu} \over x} \; , \label{S3} \\ f_F\; =\; f_{\mu} -(2/3)f_E \; , \label{F2} \end{eqnarray} which allow one to express the $f_S$ and $f_F$ functions through the numerically available functions $f_E$ and $f_{\mu}$. 
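The relations (\ref{P3}), (\ref{S3}), and (\ref{F2}) are simple pointwise operations on the numerically available functions. As a sanity check they can be exercised on the classical (Boltzmann) high-temperature limit, where $f_E=(3/2)x$ and $f_\mu=-(3/2)x\ln(\kappa x/2\pi)$; the brief Python sketch below (illustrative only) verifies that the resulting $f_F$ also satisfies $f_S=-f_F'$, Eq.\ (\ref{S2}), to numerical accuracy.

```python
import numpy as np

KAPPA = (3.0 * np.pi ** 2) ** (2.0 / 3.0) / 2.0

# Boltzmann-gas scaling functions (high-temperature asymptotics)
x = np.linspace(5.0, 50.0, 200)                      # x = T / eps_F
f_E = 1.5 * x
f_mu = -1.5 * x * np.log(KAPPA * x / (2.0 * np.pi))

# Relations (P3), (S3), (F2) applied pointwise
f_P = (2.0 / 3.0) * f_E
f_S = ((5.0 / 3.0) * f_E - f_mu) / x
f_F = f_mu - (2.0 / 3.0) * f_E

# Numerical derivative of f_F for the check f_S = -f_F', Eq. (S2)
f_F_deriv = np.gradient(f_F, x)
```

The same few lines apply verbatim to the MC-tabulated $f_E$ and $f_\mu$.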
Another useful relation is \begin{equation} f_F'\; =\; {f_{\mu} -(5/3)f_E\over x} \; , \label{F_prime} \end{equation} which allows one to extract $f_F'$ directly from $f_E$ and $f_{\mu}$, and thus provides a simple check of the data consistency: The result (\ref{F2}) for the $f_F$ curve should be consistent with the derivative deduced from (\ref{F_prime}). By integrating equation (\ref{rel0}) we get \begin{equation} f_F(x)\; =\; C_0\, x\, -\, {3\over 2}\, x\ln x\, -\, x\int_x^\infty \left ( {3\over 2}{1\over x_0}\, -\, {f_E \over x_0^2}\right) \rmd x_0 \; . \label{F3} \end{equation} Here we took into account the asymptotic ideal-gas behaviour of $f_E$: \begin{equation} f_E(x) \; \to \; {3\over 2}\, x~~~~~~~{\rm at}~~~~~~~x\; \to \; \infty \; , \label{ass_E} \end{equation} and introduced the corresponding term into the integral to render the latter convergent. The free constant of integration, $C_0$, can be restored from (\ref{F2})-(\ref{ass_E}) combined with the asymptotic ideal-gas behaviour of $f_\mu$, \begin{equation} f_\mu (x) \; \to \; -{3\over 2}\, x \ln \left( {\kappa \over 2 \pi}\, x \right )~~~~~~~{\rm at}~~~~~~~x\; \to \; \infty \; . \label{ass_mu} \end{equation} The result is \begin{equation} C_0\; =\; {3\over 2}\ln \left( {2 \pi\over \kappa } \right ) \, -\, 1 \; . \label{C_0} \end{equation} Note that if higher-order terms in the asymptotic ($x\to \infty$) behaviour of $f_E(x)$ are also known, then (\ref{F3}) can be used to establish the corresponding corrections for $f_F(x)$ and other scaling functions. For instance, it has been found in Ref.\ \cite{HoMueller04} that, as $x\to\infty$ \begin{equation} f_E(x) \to {3\over 2}\, x - {9\over 8}\, \left( {\pi \over \kappa } \right)^{3/2}{1\over \sqrt{x}} . 
\label{H_M_E} \end{equation} In accordance with (\ref{F3}), this implies ($x\to\infty$) \begin{eqnarray} f_F(x) \to C_0\, x - {3\over 2}\, x\ln x - {3\over 4}\, \left( {\pi \over \kappa } \right)^{3/2}{1\over \sqrt{x}}, \label{H_M_F}% \\ f_\mu(x) \to -{3\over 2}\, x \ln \left( {\kappa \over 2 \pi}\, x \right ) - {3\over 2}\, \left( {\pi \over \kappa } \right)^{3/2}{1\over \sqrt{x}} . \label{H_M_mu} \end{eqnarray} Finally, the scaling functions $\mathcal{W}_0$ and $\mathcal{G}_0$ defined in Ref.\ \cite{Ho04} are related to $f_E$ and $f_{\mu}$ as follows: \begin{eqnarray} \mathcal{W}_0 \left(f_{\mu}(x)/x\right) \; \equiv\; \frac{40}{9\sqrt{\pi}}\, \frac{f_E(x)}{x^{5/2}}, \\ \mathcal{G}_0 \left(x/f_{\mu}(x)\right) \; \equiv\; \frac{5}{3}\, \frac{f_E(x)}{f_{\mu}(x)^{5/2}}\; . \label{Ho-functions} \end{eqnarray} \subsection{Trapped Fermi gas \label{ssec:LDA}} So far we have considered the uniform Fermi gas, while in experimental realizations \cite{BEC-BCS-expt,Hulet-PRL2005,Grimm-in-situ,Hulet-Science2006,expt-univ-energy,expt-ramp} one has to deal with a parabolic trapping potential. The standard procedure, especially in systems with a short ``healing length'', is to use the local density approximation (LDA), \textit{i.e.} to replace the chemical potential with its coordinate-dependent counterpart $\mu(\mathbf{r})=\mu - V(\mathbf{r})$. This procedure can be easily combined with MC results as follows. We introduce the dimensionless variable $u=\mu/T=f_\mu(x)/x$ and define the scaling function for the number density as $w_{n}=x^{-3/2}$. This is equivalent to the parametric $\{ u(x), w_n(x)\}$ dependence of $n$ on $u$. The scaling functions for other thermodynamic quantities are defined in a similar manner, e.g. $w_{E}(u) \equiv f_E(x(u))$ for the energy, and $w_S \equiv f_S (x(u))$ for the entropy. 
Within LDA $u$ acquires the coordinate dependence $u(\mathbf{r})=(\mu-V(\mathbf{r}))/T$ which translates into the density profile $n(\mathbf{r};\mu,T) = w_n(u(\mathbf{r})) (mT/\kappa\hbar^2)^{3/2}$. Likewise, other thermodynamic functions are to be understood as local, coordinate-dependent quantities. Consider the case of $N$ particles in a cigar-shaped parabolic trap, characterized by the axial and radial frequencies $\omega_\|$ and $\omega_\perp$. The characteristic energy in this case is $E_F = (3N)^{1/3} \hbar (\omega_\|^2\omega_\perp)^{1/3}$, which would coincide with the Fermi energy for a non-interacting gas in the trap. Note that we denote the Fermi energy in the trap by an upright capital $E_F$ in order to avoid confusion with the uniform-system Fermi energy $\varepsilon_F$. By integrating over the radial coordinates one obtains the axial density profile $n_a(z)$ ($z$ is the axial coordinate) in the form \begin{equation} \frac{n_a(z)}{N} = \frac{(2T/E_F)^{5/2}}{\pi L_\|} \overline{w}_n\left( \frac{\mu}{T} - \frac{z^2}{2(T/E_F)L_\|^2} \right), \label{n_a} \end{equation} where $L_\| = \lambda^{1/3} (3N)^{1/6} l_\|$, the aspect ratio $\lambda=\omega_\perp / \omega_\|$, the oscillator length $l_\|^2= \hbar/m\omega_\|$, and \begin{equation} \overline{w}_n(p) = \int_{-\infty}^{p} w_n(u) \rmd u. \label{wbar} \end{equation} By integrating Eq.\ (\ref{n_a}) with respect to $z$, one finally relates the chemical potential to the temperature: \begin{equation} \overline{\overline{w}}_n\left(\frac{\mu}{T}\right) \left( \frac{T}{E_F} \right)^3 = \frac{\pi}{16}, \label{wbarbar} \end{equation} where \begin{equation} \overline{\overline{w}}_n\left( p \right) = \int_{0}^{\infty} \overline{w}_n(p-q^2) \rmd q. \label{wbarbar_def} \end{equation} Obtaining the temperature dependence of thermodynamic functions for a non-uniform system within LDA is also straightforward. 
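The quadratures (\ref{wbar})-(\ref{wbarbar_def}) are straightforward to evaluate once $w_n(u)$ is tabulated. The Python sketch below does this in the classical (Boltzmann) limit, where $w_n(u)=(\kappa/2\pi)^{3/2}e^{u}$ follows from the high-temperature form of $f_\mu$, so that the numerical solution of Eq.\ (\ref{wbarbar}) for $\mu/T$ can be checked against a closed form; the chosen value of $T/E_F$ is illustrative.

```python
import numpy as np

KAPPA = (3.0 * np.pi ** 2) ** (2.0 / 3.0) / 2.0
A = (KAPPA / (2.0 * np.pi)) ** 1.5

def w_n(u):
    """Boltzmann-limit density scaling function, w_n(u) = A * exp(u)."""
    return A * np.exp(u)

def trap_int(y, x):
    """Trapezoid rule (kept explicit for transparency)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def w_bar(p):                                   # Eq. (wbar)
    u = np.linspace(p - 40.0, p, 2001)          # exp(u) kills the lower tail
    return trap_int(w_n(u), u)

def w_barbar(p):                                # Eq. (wbarbar_def)
    q = np.linspace(0.0, 8.0, 801)              # exp(-q^2) kills the tail
    return trap_int(np.array([w_bar(p - qi ** 2) for qi in q]), q)

def solve_mu_over_T(T_over_EF, lo=-20.0, hi=20.0, n_iter=40):
    """Bisection on Eq. (wbarbar): w_barbar(mu/T) (T/E_F)^3 = pi/16."""
    target = np.pi / 16.0 / T_over_EF ** 3
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if w_barbar(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_T = solve_mu_over_T(0.5)
# closed form in the Boltzmann limit: w_barbar(p) = A * exp(p) * sqrt(pi)/2
mu_T_exact = float(np.log(np.pi / 16.0 / (A * np.sqrt(np.pi) / 2.0 * 0.5 ** 3)))
```

For the interacting gas one simply substitutes the interpolated MC table for `w_n`.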
For the total energy of a cloud $E_\mathrm{tot}$ we obtain \begin{equation} \frac{E_\mathrm{tot}}{N E_F} = \frac{16}{\pi}\left( \frac{T}{E_F}\right)^4 \int_{-\infty}^{\mu/T} \rmd p \int_{0}^{\infty} \rmd q\, w_E(p-q^2) w_n^{5/3}(p-q^2), \label{Etot} \end{equation} and likewise for the entropy: \begin{equation} \frac{S}{N} = \frac{16}{\pi}\left( \frac{T}{E_F}\right)^3 \int_{-\infty}^{\mu/T} \rmd p \int_{0}^{\infty}\rmd q\, w_S(p-q^2) w_n(p-q^2). \label{Stot} \end{equation} \section{Results and Discussion} \label{sec:results} We performed the simulations outlined in the previous sections for filling factors ranging from $0.95$ down to $0.06$ with up to about 300 fermions on lattices with up to $16^3$ sites. The typical rank of the determinants involved in the computations of acceptance ratios (Sec.\ \ref{sec:updates}) and estimators (Sec.\ \ref{ssec:estimators}) is up to $M \sim 5000$. Since we only need ratios of determinants, we use fast-update formulas \cite{Rubtsov} to reduce the computational complexity of updates from $M^3$ down to $M^2$. We validate our numerical procedure by comparing results against the exact diagonalization data for a $4\times 4$ cluster \cite{Husslein}, and other simulations of the critical temperature at quarter filling $\nu=0.5$ \cite{Zotos,DCA} and $\nu=0.25$ \cite{DCA}. In all cases we find agreement within statistical errors of a few percent. \subsection{Critical temperature} \label{ssec:tc} Figure \ref{fig:crossing} shows a typical example of the finite-size analysis outlined in Sec.\ \ref{sec:sense}. Despite the fact that the numerically accessible system sizes are quite small, \fref{fig:crossing} (and a similar analysis for the whole range of filling factors) supports the expectation that the universality class of the SF phase transition is U(1). The finite-size analysis allows us to pinpoint the phase transition temperature to within a few percent. 
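The fast-update formulas mentioned above exploit the fact that a single update changes the matrices (\ref{matrix}) only by a low-rank correction, so the determinant ratio and the stored inverse can be refreshed in $O(M^2)$ operations. A minimal Python sketch of the underlying identities (matrix determinant lemma plus the Sherman-Morrison formula) for a rank-1 row replacement, on a random well-conditioned matrix; the actual update set of the algorithm is richer, and all names here are illustrative.

```python
import numpy as np

def replace_row(A_inv, row_old, row_new, k):
    """O(M^2) fast update when row k of A is replaced by row_new.
    A' = A + e_k v^T with v = row_new - row_old; the determinant ratio
    follows from the matrix determinant lemma and the new inverse from
    the Sherman-Morrison formula."""
    v = row_new - row_old
    ratio = 1.0 + v @ A_inv[:, k]                 # det(A') / det(A)
    new_inv = A_inv - np.outer(A_inv[:, k], v @ A_inv) / ratio
    return ratio, new_inv

rng = np.random.default_rng(3)
M, k = 8, 2
A = rng.normal(size=(M, M)) + M * np.eye(M)       # well-conditioned
row_new = A[k] + 0.5 * rng.normal(size=M)         # moderate row change
ratio, new_inv = replace_row(np.linalg.inv(A), A[k].copy(), row_new, k)

# brute-force check against O(M^3) recomputation
A2 = A.copy()
A2[k] = row_new
ratio_slow = np.linalg.det(A2) / np.linalg.det(A)
```

In a Metropolis step the ratio alone decides acceptance; the inverse is refreshed only if the update is accepted.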
\begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure4.eps} \caption{ A typical crossing of the $R(L,T)$ curves. The errorbars are $2\sigma$, and solid lines are the linear fits to the MC points. The inset shows the finite-size scaling of the filling factor ($\nu$ vs $1/L$), which yields $\nu=0.148(1)$. From this plot and Eq.\ (\ref{Tc-fit}) one obtains $1/T_c(\nu) = 4.41(5)/t$.} \label{fig:crossing} \end{figure} Shown in \fref{fig:tc} is the dependence of the critical temperature on the lattice filling factor. The critical temperature is measured in units of the Fermi energy, as is natural for the unitarity limit. We define the Fermi momentum for a lattice system with filling factor $\nu$ as $k_F = (3\pi^2 \nu)^{1/3}$ and the Fermi energy $\varepsilon_F = k_F^2$, as those of a continuum gas with the same effective mass and number density $n=\nu$. It is clearly seen that the presence of the lattice suppresses the critical temperature considerably, by nearly a factor of $4$, depending on the filling factor. The strong dependence of $T_c$ on $\nu$ is in apparent contradiction with Ref.\ \cite{Bulgac}, which claims weak or no $\nu$-dependence. This disagreement might be due to the difference in the single-particle spectra $\varepsilon_\mathbf{k}$ used: Ref.\ \cite{Bulgac} employs the parabolic spectrum with a spherically symmetric cutoff, while we use the tight-binding dispersion law. Indeed, Eqs.\ (\ref{Gamma-A})-(\ref{Gamma-B}) indicate that a particular choice of $\varepsilon_\mathbf{k}$ does influence lattice corrections to $T_c$, which may even have different signs for different $\varepsilon_\mathbf{k}$. \begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure5.eps} \caption{ The scaling of the lattice critical temperature with the filling factor (circles). $\nu=1$ corresponds to half filling. The errorbars are one standard deviation. 
The results of Refs.\ \cite{Zotos,DCA} at quarter filling and $\nu=0.25$ are also shown for comparison. See the text for discussion. } \label{fig:tc} \end{figure} It is also clear from \fref{fig:tc} that close to half-filling $T_c$ is essentially constant, as expected (see, \textit{e.g.} \cite{Micnas90}). The predicted $\sim \nu^{1/3}$ scaling (\ref{nu-13}) sets in at about $\nu \approx 0.5$. We thus use a linear fit $T_c(\nu)/\varepsilon_F(\nu) = T_c/\varepsilon_F - \mathrm{const}\cdot \nu^{1/3}$ to eliminate lattice corrections in the final result. This fitting procedure results in the best-fit line given by $0.152(7) - 0.13(2)\nu^{1/3}$. We further analyze the fit residues in order to estimate the effect of the sub-leading lattice corrections, which are expected to be proportional to $\nu^{2/3}$. As shown in \fref{fig:residues}, such corrections, if any, are smaller than the uncertainty of the $\nu^{1/3}$ fit. This analysis yields the final result $T_c/\varepsilon_F = 0.152(7)$ for the continuum uniform gas, which is noticeably below the transition temperature in the BEC limit $T_\mathrm{BEC} = 0.218 \varepsilon_F$. Various approximate analytical treatments led in the past to $T_c$ either above \cite{NSR,Randeria95,Holland-Timmermans,Perali} or below \cite{Haussmann94,Ohashi-Griffin,LiuHu} $T_\mathrm{BEC}$. \begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure6.eps} \caption{ The fit residues for the best-fit line of \fref{fig:tc}, plotted versus $\nu^{2/3}$ (circles). The zero level is shown by the horizontal line; the blue dashed lines are linear fits to the data points for $\nu<0.5$ and $\nu<0.35$, respectively. } \label{fig:residues} \end{figure} It is instructive to compare our results for $T_c$ to other numerical calculations available in the literature. The simulations of Ref.\ \cite{Wingate} yield $T_c = 0.05 \varepsilon_F$, but at a value of the scattering length which has not been determined precisely. 
This result most probably corresponds to a deep BCS regime, where the transition temperature is exponentially suppressed. Lee and Sch\"{a}fer \cite{Lee-Schaefer} report an upper limit $T_c < 0.14 \varepsilon_F$, based on a study of the caloric curve of a unitary Fermi gas down to $T/\varepsilon_F=0.14$ for filling factors down to $\nu=0.5$. The caloric curve of Ref.\ \cite{Lee-Schaefer} shows no signs of a divergent heat capacity which would signal the phase transition. This upper limit is consistent with $T_c(\nu=0.5)/\varepsilon_F \approx 0.054$, see \fref{fig:tc}. The Seattle group has performed simulations of the caloric curve and condensate fraction, $n_0$, of the unitary gas, Ref.\ \cite{Bulgac}. Using a ``visual inspection'' of the caloric curve shape, the critical temperature was estimated in Ref.\ \cite{Bulgac} to be $T_c=0.22(3)\varepsilon_F$. Unfortunately, the authors did not perform a finite-size analysis and the $\nu \to 0$ extrapolation. The overall shape of the caloric curve seems to be little affected by the finite volume of the system. This is hardly surprising since even in the thermodynamic limit $E(T)$ and its derivative $\rmd E/\rmd T$ are continuous at the transition point. These properties also make it hard to use non-quantitative measures for reliable estimates of critical parameters from the $E(T)$ curve. On the other hand, the condensate fraction, which has singular properties at $T_c$, does show sizable finite-size corrections, see figure~1 of Ref.\ \cite{Bulgac}. At this point we note that the scaling of the condensate fraction is identical to that of $K(L,T)$. In \fref{fig:crossing-b} we plot the data of the Seattle group as $n_0 L^{1+\eta}$ versus temperature. The intersection of the scaled curves turns out to be inconsistent with the estimate for $T_c$ derived from the caloric curve inspection. 
\begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure7upd.eps} \caption{ The finite-size scaling of the condensate fraction data from Ref.\ \cite{Bulgac}. Raw data points are rescaled similarly to Eq.\ (\ref{K-rescaled}) by the $L^{1+\eta}$ factor. Shaded vertical strips represent the results for $T_c/\varepsilon_F$ of this work and Ref.\ \cite{Bulgac}, respectively; solid lines are drawn to guide the eye.} \label{fig:crossing-b} \end{figure} \subsection{Thermodynamic functions} \label{ssec:res:therm} The filling factor dependence of thermodynamic quantities is similar to that of $T_c$: Figure~\ref{fig:emu-tc} displays the behaviour of energy and chemical potential along the critical line $T=T_c(\nu)$. The extrapolation towards $\nu\to 0$ yields for the continuum gas \begin{eqnarray} &E / (N\varepsilon_F) \; =\; 0.31(1)~~~~~~~~~ &(T=T_c)\; , \\ &\mu / \varepsilon_F\; =\; 0.493(14)~~~~~~~~~&(T=T_c) \; . \label{magic_numbers} \end{eqnarray} The numerical values for other thermodynamic functions at criticality can be easily restored using the formulas of Sec.\ \ref{sec:thermodynamics}. \begin{figure} \includegraphics[width=0.99\columnwidth,keepaspectratio=true]{figure8a_8b.eps} \caption{ Energy (left-hand panel) and chemical potential (right-hand panel) dependence on the filling factor along the critical line $T=T_c(\nu)$. Dots are the MC results, dashed lines are the linear fits. } \label{fig:emu-tc} \end{figure} In order to elucidate the thermodynamic behaviour of the unitary gas, we performed simulations for a range of temperatures $T>T_c$. Shown in \fref{fig:emu} are the simulation results for energy and chemical potential of the continuum gas as functions of temperature. Each point was obtained using a data analysis similar to that depicted in \fref{fig:emu-tc}. In the high-temperature region we simulated up to $80$ fermions on lattices with up to $32^3$ sites. 
In this region, the condition $\nu \ll 1$ is necessary but not sufficient for extrapolation to the continuum limit, for it is crucial to keep the temperature much smaller than the bandwidth: $T \ll 6t$. As can be seen from \fref{fig:emu}, our results for both energy and chemical potential approach the virial expansion \cite{HoMueller04} as $T/\varepsilon_F \to \infty$. For $T/\varepsilon_F \leqslant 0.5$ our data are not far from the curve of Ref.\ \cite{Bulgac}. Though we do not have data points for $T<T_c$, there is still a reasonable agreement even at $T_c$ with the $T\to 0$ fixed-node MC values \cite{Giorgini-Carlson}. In this region, our results are consistent with a very weak dependence of energy and chemical potential on temperature, and the numerical values of both are consistent with the experimental results \cite{Grimm-in-situ,Hulet-Science2006,expt-univ-energy}. \begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure9.eps} \caption{ The temperature dependence of the energy per particle (upper panel) and chemical potential (lower panel) of the unitary Fermi gas. Red circles are the MC results, black dotted lines and blue dashed lines correspond to the Boltzmann and non-interacting Fermi gases, respectively, the dot-dashed lines are the asymptotic prediction of Ref.\ \cite{HoMueller04} (plus the first virial Fermi correction), black triangles are the MC results of Ref.\ \cite{Bulgac}, and the purple stars denote the ground-state fixed-node MC results \cite{Giorgini-Carlson}. } \label{fig:emu} \end{figure} Using Eq.\ (\ref{F2}) and the data from \fref{fig:emu} we deduce the dependence of the free energy on temperature, see \fref{fig:consi}. We also use Eq.\ (\ref{F_prime}) to make sure that our MC data for energy and chemical potential are consistent with each other. \begin{figure} \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure10.eps} \caption{ Free energy versus temperature. 
Red dots are the MC data, and dashes represent the errorbar range for the derivative of the free energy, calculated via Eq.\ (\ref{F_prime}). Black triangles are the MC results of Ref.\ \cite{Bulgac}, the purple star denotes the ground-state fixed-node MC result \cite{Giorgini-Carlson}, the black dotted line shows the Boltzmann gas curve, and the blue dashed line is the asymptotic prediction of Ref.\ \cite{HoMueller04}.} \label{fig:consi} \end{figure} \subsection{Trapped gas\label{ssec:therm}} As discussed in Sec.\ \ref{ssec:LDA}, the thermodynamic functions for the uniform case can be used for the analysis of experimental systems within the local density approximation. In this section we report preliminary results of our ongoing study of trapped gas experiments. The analysis starts with an interpolation procedure which produces a continuous functional form for the thermodynamic functions, consistent with the discrete set of simulated points. We use a piecewise-cubic ansatz with a smooth crossover to the virial expansion, Eq.\ (\ref{H_M_F}), for the free energy. The temperature dependence of both energy and entropy is then deduced using numerical integration of Eqs.~(\ref{Etot}) and (\ref{Stot}), respectively. As the trapped gas is cooled down, superfluidity first sets in at the centre of the trap, where the density is the highest. Equation (\ref{wbarbar}) can be used to pinpoint this onset temperature: at $T=T_c$, $\mu/T=(\mu/\varepsilon_F^{0}) / (T/\varepsilon_F^{0})$, where $\varepsilon_F^{0}$ is the Fermi energy of the uniform gas with the density equal to the density at the trap centre. Using $T_c/\varepsilon_F^{0} = 0.152(7) $ and Eq.\ (\ref{magic_numbers}), one obtains $T_c/E_F = 0.20(2)$. 
We quote here a conservative estimate for the uncertainty, which incorporates both the uncertainty of the critical temperature itself, and a systematic uncertainty which stems from restoring the continuous functional dependence of the chemical potential out of the finite set of the Monte Carlo calculated points with finite errorbars. Experimentally, the temperature of a strongly interacting Fermi gas is not easily accessible. In contrast, thermometry of non-interacting Fermi gases is well established. In the adiabatic ramp experiments one starts from the non-interacting gas at some temperature [in units of Fermi energy] $\left( T/T_F \right) ^{0}$, and slowly ramps the magnetic field towards the Feshbach resonance \cite{expt-ramp}, thus adiabatically connecting the system at unitarity to a non-interacting one. Assuming entropy conservation during the magnetic field ramp, Eq.\ (\ref{Stot}) can be employed for the thermometry of the interacting gas: by matching the entropy of a non-interacting gas at the temperature $\left( T/T_F \right) ^{0}$ with the entropy calculated via Eq.\ (\ref{Stot}) one relates the initial temperature (before the magnetic field ramp) to the final temperature (after the ramp). We find that the onset of superfluidity corresponds to $\left( T/T_F \right) ^{0} = 0.12 \pm 0.02$ (again, we quote here the most conservative estimate for the errorbar). This value seems to be somewhat lower than the value suggested by the experimental results \cite{expt-ramp}. Nevertheless, given the level of the noise in figure 4 of Ref.\ \cite{expt-ramp} the consistency is reasonable. An alternative thermometry can be built on recent advances in the experimental technique \cite{Grimm-in-situ,Hulet-Science2006} which made it possible to directly image the \textit{in situ} density profiles of the interacting system. Such density profiles can be directly fit to Eq.\ (\ref{n_a}), which gives the shape of the cloud depending on $\mu/T$ and $T/E_F$. 
By relating the chemical potential to the temperature using Eq.\ (\ref{wbarbar}), one is left with only one fitting parameter, $T/E_F$ (apart from a trivial fitting parameter $z_0$ which accounts for the overall shift of the cloud image off the trap centre). As an illustrative example of such a procedure we have analyzed the experimental density profiles measured by Rice's group \cite{Hulet-Science2006}, as depicted in \fref{fig:fit}. From this analysis we deduce an upper bound for the temperature in the experiments \cite{Hulet-Science2006}, $T<0.1 E_F$, which is consistent with the results of the measurements of the condensate fraction \cite{Hulet-PRL2005}. Since in the experiments \cite{Hulet-PRL2005,Hulet-Science2006} the gas is very degenerate, one is able to put only an upper limit on the temperature. \begin{figure}[th] \includegraphics[width=0.75\columnwidth,keepaspectratio=true]{figure11.eps} \caption{ Axial density profiles: experimental data (dots) are taken from figure 3 of \cite{Hulet-Science2006}. The full red line is calculated via Eqs.\ (\ref{wbar})--(\ref{wbarbar}) with $T/E_F = 0.03$ (correspondingly, $T/\varepsilon_F^{(0)}=0.02$), the dashed blue line corresponds to $T/E_F = 0.16$ ($T/\varepsilon_F^{(0)}=0.1$), and the dot-dashed green line is for $T/E_F = 0.22$ ($T/\varepsilon_F^{(0)}=0.16$). In all cases we allowed for a horizontal displacement of the whole curve in the range $|z_0| < 20 \mu$m. See the text for discussion.} \label{fig:fit} \end{figure} Note that if the temperature is known from, e.g., the adiabatic ramp experiments, Eqs.\ (\ref{n_a})--(\ref{wbarbar}) must reproduce the cloud shape \textit{without free parameters} (apart from $z_0$). \section{Conclusions} \label{sec:concl} We have developed a worm-type scheme within the systematic-error-free determinant diagrammatic Monte Carlo approach for lattice fermions.
We applied it to the Hubbard model with attractive interaction and equal numbers of spin-up and spin-down particles. At finite densities, the model describes ultracold atoms in an optical lattice. In the limit of vanishing filling factor, $\nu \to 0$, and on-site attraction fine-tuned to the resonance in the two-particle $s$-wave channel, a universal regime sets in, which is identical to the BCS-BEC crossover in continuous space. In the present work, we confined ourselves to a special value of the on-site interaction, $U=U_*\approx -7.915 t$, corresponding to a divergent $s$-wave scattering length. At $U=U_*$ and $\nu \to 0$, the system reproduces the unitary point of the BCS-BEC crossover. The unitary regime is scale-invariant: all thermodynamic potentials are expressed in terms of dimensionless scaling functions of the dimensionless ratio $T/\varepsilon_F$ (temperature in units of Fermi energy). We obtained these scaling functions by extrapolating results for the Hubbard model to $\nu \to 0$. For the critical temperature of the superfluid-normal transition in the uniform case we found $T_c/\varepsilon_F=0.152(7)$. Our results form a basis for an unbiased thermometry of trapped fermionic gases in the unitary regime: in particular, we found (within the local density approximation) that for parabolic confinement, the critical temperature in units of the characteristic trap energy $E_F$ is $T_c/E_F = 0.20(2)$. For the experimentally relevant case of an isentropic conversion of a gas from the non-interacting regime to the unitary regime we find that the onset of superfluidity corresponds to the initial temperature (before the magnetic field ramp) $\left( T/T_F \right)^0 = 0.12\pm 0.02$, which is reasonably consistent with the experimental result \cite{expt-ramp}, to within the experimental noise. \ack We appreciate the generosity of A.~Bulgac, P.~Magierski, and J.~Drut, who kindly provided us with their numeric data.
We are also indebted to W.~Li and R.~Hulet for providing us with their unpublished experimental data. This research was enabled by computational resources of the Center for Computational Sciences and in part supported by the Laboratory Research and Development program at Oak Ridge National Laboratory. Part of the simulations were performed on the ``Hreidar'' cluster of ETH Z{\"u}rich. We also acknowledge partial support by NSF grants Nos. PHY-0426881 and PHY-0456261.
hep-ph/0605068
\section{Introduction} Factorization theorems \cite{collins89} for inclusive hard scattering processes are our main tool for the quantitative analysis of cross sections involving hadrons when a generic hard scale $Q^2$ is taken to infinity. Taking Drell-Yan (DY) lepton-pair production as an example, it is well known that the cross section can be expressed as a convolution of a hard scattering coefficient function (or ``Wilson coefficient'') calculated perturbatively, and a non-perturbative, universal, parton distribution function (PDF) for each one of the incoming hadrons. Corrections to the factorized cross section scale as $1/(Q^2)^n$ where $n \geq 1$, up to logarithmic ratios. However, it is also well known that a fixed-order pQCD calculation of the Wilson coefficient yields singular distribution functions of the form \begin{eqnarray} \alpha_s^k\left[\frac{\ln^{m-1}(1-z)}{(1-z)}\right]_+,\,\,\,\, (m\leq 2k) \end{eqnarray} where $z=Q^2/{\hat s}$ and ${\hat s}$ is the total momentum squared of the incoming partons. The ``plus'' distributions are defined in the usual way. The appearance of such distributions is a result of the emission of soft and/or collinear gluons into the final state. When such distributions are Mellin transformed to the conjugate space, logarithms of the form $\alpha_s^k\ln^m{\overline N}$ ($m=2k,2k-1,\ldots,0$) show up, where ${\overline N}\equiv N\exp(\gamma_E)$ is the conjugate variable of $z$ and $\gamma_E$ is the Euler constant. In the limit $z\rightarrow 1$ or, equivalently, large $\overline N$, a fixed-order perturbative calculation cannot be trusted and an all-order resummation of the large logarithms is needed. This is what is generically meant by ``threshold resummation''. This notion has evolved during the last 20 years into one of the most studied and highly developed subjects within perturbative quantum chromodynamics (pQCD).
Earlier studies \cite{Ste87,CatTre89} supplied a sound and rigorous (although complicated) treatment of such resummation. In both of these works, resummation is performed after establishing a factorized form of the cross section in terms of well-defined quantities (at the operator level) that capture the physics at the hard, jet and soft scales. An integral transformation to the conjugate space is then applied in order to de-convolute the various terms in the cross section. Then, in the conjugate space, energy evolution equations are solved and the exponentials thus obtained contain the resummed large logarithms. Thus the perturbative expansion is put under control, and the contributions obtained from the yet uncalculated higher orders in $\alpha_s$ reduce the theoretical uncertainty inherent in any fixed-order calculation. Thereby better phenomenological studies can be carried out, and better agreement with experimental data is usually obtained. More recent studies have further developed and refined this topic \cite{ster97,vogt1,ridolfi,kidonakis,vogel,RAVI}. In this paper we adopt the effective field theory (EFT) approach to resum the threshold large logarithms. This approach was first applied to the deep inelastic (non-singlet) structure function in the limit $x\rightarrow 1$, where $x$ is the partonic Bjorken variable \cite{Man03}. Later on it was applied to the DY process in the limit $z\rightarrow 1$ \cite{IdiJi05}. In both cases, resummation was performed up to next-to-leading logarithms (NLL). The implementation of the EFT methodology to resum threshold logarithms is made concrete by the recently developed ``soft collinear effective theory'' (SCET) \cite{SCET,SCET1}. SCET describes interactions between soft and collinear partons. It is the most appropriate framework to calculate contributions from the soft-collinear limit of the full QCD calculations (more commonly known as the ``soft limit'').
Therefore any perturbative calculation within SCET has to reproduce the same result as full QCD in that limit. To ${\cal O}(\alpha_s)$, this has been verified explicitly for DIS and DY \cite{Man03,IdiJi05}. This is also the case when one considers distributions at small transverse momentum \cite{Feng1,Gao}. Moreover, it was also shown that the one-loop diagrams (the form factor type of diagrams) calculated in SCET have the same infrared (IR) pole structure as the full QCD calculation \cite{Man03,Wei}. [The result in \cite{Man03} for the collinear diagrams involves mixed poles of IR and ultraviolet (UV) divergences. This can be treated by applying the ``zero-bin'' subtraction \cite{Man06}]. These observations are expected to remain valid at higher orders in the strong coupling. This allows us to extract the relevant quantities needed to perform resummation from the full QCD calculations, as we shall see below. The EFT resummation program described here is conceptually simple and has been explained in detail in \cite{JiPRL}. The starting point (again considering DY as an example) is the collinearly factorized inclusive cross section in moment space \cite{Catani:2003zt} \begin{equation} \label{sigmaN} \sigma_N = \sigma_0\cdot G_N(Q)\cdot q(Q,N)\cdot q(Q,N), \end{equation} where $\sigma_0$ is the Born level cross section, $q(Q,N)$ is the PDF (one factor for each incoming hadron), and \begin{eqnarray} \label{csfac} G_N(Q) &=&\vert C(\alpha_s(Q^2))\vert^2e^{I_1(Q/\mu_I,\alpha_s(Q^2))} \times {\cal M}_N(\alpha_s(Q^2)) e^{I_2(Q/\mu_I,\alpha_s(\mu_I^2))}e^{I_3(Q/\mu_I,\alpha_s(Q))}. \end{eqnarray} Explicit expressions for the various contributions in $G_N$ will be given below; however, we want to comment on their physical origin. $C(\alpha_s(Q^2))$ contains the non-logarithmic contribution of the purely virtual diagrams, and the first exponent $I_1$ contains all the logarithms originating from the same type of diagrams.
Both quantities are obtained from the matching procedure at the scale $Q$ and the running between $Q$ and the intermediate scale $\mu_I$. This running, in turn, is controlled by the anomalous dimension of the EFT current, to be denoted by $\gamma_1$. The intermediate scale $\mu_I$ shows up when real gluons are emitted, so one must consider the cross section with real gluon emissions. The result will have both soft and collinear divergences. When the IR poles from the virtual diagrams are taken into account (in the EFT approach this is done by including the contribution from the counterterms of the effective operators), the total contribution will contain only collinear divergences, to be absorbed into a product of two PDFs. The conclusion is that the matching procedure at the intermediate scale is guaranteed to work to all orders in perturbation theory, following the factorization theorem, as long as the EFT used generates the full QCD results in the appropriate kinematical limit. One then gets the matching coefficient ${\cal M}_N(\alpha_s(\mu_I))$, which by definition is finite in the unregulated theory. This quantity has to be free of any logarithms. $I_2$ collects all the logarithms that are due to the evolution of the PDF between $\mu_I$ and the factorization scale $\mu_F$. This is controlled by the anomalous dimension of the PDF, to be denoted by $\gamma_2$. $I_3$ encodes all the contributions due to the running of the coupling constant between the matching scales ($Q$ and $\mu_I$) and the final factorization scale $\mu_F$. All the large logarithms appear only in the exponents, and the term $\vert C(\alpha_s(Q^2))\vert^2{\cal M}_N(\alpha_s(Q^2))$ is free of any large logarithms. In Eq.~(\ref {csfac}) we have chosen $\mu_F=Q$ for simplicity. The above formalism must be contrasted with the more conventional, factorization-based one. It will be shown that the EFT approach is equivalent to the other approaches to all logarithmic accuracies.
We will derive all the known ingredients needed to perform threshold resummation to next-to-next-to-next-to-leading logarithmic accuracy (${\rm N}^3{\rm LL}$) for the DIS non-singlet structure function, the DY process and the closely related Standard Model (SM) Higgs production through gluon-gluon fusion via a top quark loop. Moreover, the integrations in Eq.~(\ref{csfac}) are very easy to perform, and we have carried them out up to $g^{(3)}$, which resums the NNLL. This calculation is to be compared with the ones explained in, e.g., Appendix A of Refs.~\cite{Catani:2003zt,Vogt288}. In this paper we use dimensional regularization in $d=4-2\varepsilon$ to regulate both the UV and the IR divergences, and we utilize the $\overline {\rm MS}$ scheme throughout. This paper is organized as follows. In Sec.II we derive the anomalous dimension of the quark and gluon effective currents up to ${\cal O}(\alpha_s^3)$ and write down the matching coefficients at the scale $Q^2$ up to ${\cal O}(\alpha_s^2)$. In Sec.III we obtain the matching coefficients at $\mu_I^2$ to ${\cal O}(\alpha_s^2)$ and give our final expression for the resummed coefficient function $G_N$. There we also comment on the universality of the functions $f_{(q,g)}$ that enter the quark and gluon anomalous dimensions of the effective operators. In Sec.IV we compare the EFT approach with the conventional one and derive our main result that establishes the full equivalence of the two approaches. From that relation we obtain the recently calculated $D^{(3)}_{(q,g)}$ for DY and Higgs production and ${\cal B}^{(3)}_q$ for DIS. We carry out the integration in the resummed coefficient function to illustrate the simplicity of the EFT results and obtain the well-known functions $g^{(i)}(\lambda)_{(q,g)}$ for $i=1,2,3$. Our conclusions are presented in Sec.V.
In the Appendix we write down explicit expressions for the soft and virtual limits in full QCD for all the processes we consider, up to ${\cal O}(\alpha_s^2)$, in $z$ space and in the conjugate space (for large moments). \section{Anomalous Dimension and Matching Coefficients for Effective Currents} The EFT approach for resummation starts by calculating the contributions at the scale $Q^2$. Technically this is done by matching the full QCD currents to the EFT currents at the scale $Q^2$ by considering the purely virtual diagrams in the full theory. By doing this, we integrate out the hard modes of virtualities of order $Q^2$. The matching of the currents can be expressed as an operator expansion \begin{equation} J_{\rm QCD}=C(Q^2/\mu^2,\alpha_s(\mu^2)) J_{\rm eff}(\mu) + ..., \end{equation} where $C$ is the matching coefficient, $\mu$ is the factorization or renormalization scale of the effective current, and the ellipsis denotes higher-dimensional currents which will be ignored in this work. We will consider the quark vector current $J^\mu = \bar \psi \gamma^\mu\psi$ for the DIS and DY cases and the gluon scalar current $J=G^{\mu\nu}G_{\mu\nu}$ for Higgs production in hadron colliders. The anomalous dimensions of the effective currents that control the running (with $\mu$) are defined as \begin{equation} \label{evo1} \gamma_1 (\mu) = -\mu \frac{d\ln J_{\rm eff}}{d\mu}. \end{equation} If the matrix elements of the currents in full QCD are independent of the factorization scale, as for the quark vector and axial-vector currents, the same anomalous dimensions are obtained from the matching coefficients of the effective currents \begin{equation} \gamma_1 (\mu) = \mu \frac{d\ln C}{d\mu}. \end{equation} The anomalous dimension is a function of both $Q^2/\mu^2$ and $\alpha_s(\mu^2)$.
In fact, it can be shown that it is a linear function of $\ln Q^2/\mu^2$ to all orders in perturbation theory \cite{Man03}; \begin{equation} \gamma_1 = A(\alpha_s)\ln Q^2/\mu^2 + B_1(\alpha_s), \label{adim} \end{equation} where $A$ and $B_1$ have expansions in $a_s\equiv\alpha_s(\mu^2)/4\pi$: $A=\sum_ia_s^iA^{(i)}$ and $B_1=\sum_ia_s^i B^{(i)}_1$, and $\alpha_s$ is the renormalized coupling constant. To obtain the anomalous dimensions and the matching coefficients, we consider the simplest matrix elements of the full QCD currents between on-shell massless quark and gluon states. They are just the on-shell form factors $F$. Since these are ``physical'' observables, there are no UV divergences, but there are IR ones. To all orders in $\alpha_s$, we can write \begin{equation} \label{CS} F= C(Q^2/\mu^2,\alpha_s(\mu^2)) S(Q^2/\mu^2,\alpha_s(\mu^2),1/\epsilon), \end{equation} where $S$ contains only infrared poles in dimensional regularization (i.e., no finite terms). $S$ can be regarded as the matrix element of the effective current $J_{\rm eff}$ after renormalization has already been performed. In the effective theory, Feynman diagrams for $S$ have vanishing contributions in dimensional regularization because there are no scales in the integrals. This can be regarded as the result of a cancellation of IR and UV poles. As such, the IR poles in $S$ may be treated as UV poles for the purpose of calculating the anomalous dimension \begin{equation} \gamma_1 (\mu) = -\mu \frac{d\ln S}{d\mu}. \end{equation} Since $C$ does not contain any pole part, we can also write \begin{equation} \gamma_1 (\mu) = - \mu \left.\frac{d\ln F}{d\mu}\right|_{\rm pole~ part}. \end{equation} Therefore, the perturbative results for $F$ up to any loop order can be used to calculate the anomalous dimensions to the same order.
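To make the extraction concrete, consider a one-loop sketch (our illustration; ${\cal O}(\epsilon)$ terms are suppressed). The renormalized space-like quark form factor in the $\overline{\rm MS}$ scheme reads
\begin{equation}
F = 1 + a_s C_F\, e^{-\epsilon L}\left[-\frac{2}{\epsilon^2}-\frac{3}{\epsilon}-8+\zeta_2\right] + {\cal O}(a_s^2),
\qquad L\equiv\ln\frac{Q^2}{\mu^2},
\end{equation}
whose finite part is $C^{(1)}=C_F\left[-L^2+3L-8+\zeta_2\right]$. Using $dL/d\ln\mu=-2$,
\begin{equation}
\gamma_1^{(1)}=\frac{d}{d\ln\mu}\,C_F\left[-L^2+3L-8+\zeta_2\right]=4C_F\ln\frac{Q^2}{\mu^2}-6C_F,
\end{equation}
which reproduces $A^{(1)}_q=4C_F$ and $B^{(1)}_{1,q}=-6C_F$ quoted below.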
The best way to see the physical content of the form factor is to consider a resummed form \cite{CollinsForm,Magnea90,Magnea2001} \begin{equation} \ln F(\alpha_s) = \frac{1}{2} \int^{Q^2/\mu^2}_0 \frac{d\xi}{\xi} \left(K(\alpha_s(\mu),\epsilon) + G(1, \alpha_s(\xi\mu,\epsilon), \epsilon) + \int^1_\xi \frac{d\lambda}{\lambda} A(\alpha_s(\lambda \mu,\epsilon))\right), \end{equation} where $A$ is the anomalous dimension of the $K$ and $G$ functions, \begin{equation} A(\alpha_s) = \mu^2\frac{dG}{d\mu^2} = - \mu^2\frac{dK}{d\mu^2}, \end{equation} and is in fact the same $A$ as in Eq.~(\ref{adim}). $K$ contains only the IR poles, and therefore the whole $K$-function can be constructed from the perturbative expansion $A=\sum_i a_s^i A^{(i)}$. The function $G$ contains only the hard contribution, and has a perturbative expansion \begin{eqnarray} G(1,\alpha_s,\epsilon) &=& \sum_i a_s^i G^{(i)}(\epsilon). \end{eqnarray} Thus $\ln F$ can be expressed entirely in terms of $G^{(i)}$ and $A^{(i)}$. The anomalous dimensions $A_q$ for the quark vector current (DIS and DY) and $A_g$ for the gluon scalar current (Higgs production) have been calculated up to ${\cal O}(\alpha_s^3)$ \cite{Vogt192}, \begin{eqnarray} \label{a} A_{(q,g)}^{(1)} &=& 4C_{(q,g)}, \nonumber \\ A_{(q,g)}^{(2)} &=& 8C_{(q,g)} \left[\left(\frac{67}{18}-\zeta_2\right)C_A -\frac{5}{9}N_F\right], \nonumber \\ A_{(q,g)}^{(3)} &=& 16C_{(q,g)}\left[C_A^2\left(\frac{245}{24} - \frac{67}{9}\zeta_2 + \frac{11}{6}\zeta_3 + \frac{11}{5}\zeta^2_2\right) + C_F N_F\left(-\frac{55}{24} + 2\zeta_3\right)\right. \nonumber \\ &&\left. + C_A N_F\left(-\frac{209}{108}+ \frac{10}{9}\zeta_2 - \frac{7}{3} \zeta_3\right) + N_F^2\left(-\frac{1}{27}\right)\right], \end{eqnarray} where $C_{(q,g)}=C_F$ for the quark and $C_A$ for the gluon. In this sense $A$ is universal.
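As a numerical cross-check (our illustration; variable names are ours), evaluating Eq.~(\ref{a}) for the quark case with $N_F=5$ reproduces the commonly quoted values of the cusp anomalous dimension, $A^{(2)}_q\simeq 36.84$ and $A^{(3)}_q\simeq 239.2$, in the $a_s=\alpha_s/4\pi$ expansion:

```python
import math

# Numerical values of the cusp anomalous dimension A_q, Eq. (a),
# in the expansion parameter a_s = alpha_s/(4 pi), for N_F = 5 flavours.
CF, CA, NF = 4.0 / 3.0, 3.0, 5.0
z2 = math.pi**2 / 6.0            # zeta_2
z3 = 1.2020569031595943          # zeta_3

A1 = 4.0 * CF
A2 = 8.0 * CF * ((67.0 / 18.0 - z2) * CA - 5.0 / 9.0 * NF)
A3 = 16.0 * CF * (CA**2 * (245.0/24.0 - 67.0/9.0 * z2 + 11.0/6.0 * z3
                           + 11.0/5.0 * z2**2)
                  + CF * NF * (-55.0/24.0 + 2.0 * z3)
                  + CA * NF * (-209.0/108.0 + 10.0/9.0 * z2 - 7.0/3.0 * z3)
                  + NF**2 * (-1.0/27.0))

print(A1, A2, A3)   # approximately 5.333, 36.84, 239.2
```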
The expansion coefficients for the $G$ function have been obtained up to three loops from explicit calculations of the quark and gluon form factors \cite{Vogt055}: \begin{eqnarray} G_{(q,g)}^{(1)} &=& 2(B_{2,(q,g)}^{(1)}- \delta_g\beta_0) + f_{(q,g)}^{(1)} + \epsilon \tilde G_{(q,g)}^{(1)} + \epsilon^2 \tilde{\tilde G}^{(1)}_{(q,g)}, \nonumber \\ G_{(q,g)}^{(2)} &=& 2(B_{2,(q,g)}^{(2)}- 2\delta_g\beta_1) + f_{(q,g)}^{(2)} + \beta_0 \tilde G_{(q,g)}^{(1)}+ \epsilon \tilde G_{(q,g)}^{(2)}, \nonumber \\ G_{(q,g)}^{(3)} &=& 2(B_{2,(q,g)}^{(3)}- 3\delta_g\beta_2) + f_{(q,g)}^{(3)} + \beta_1 \tilde G_{(q,g)}^{(1)}+ \beta_0 \left[\tilde G_{(q,g)}^{(2)}-\beta_0\tilde {\tilde G}_{(q,g)}^{(1)}\right], \label{gg} \end{eqnarray} where $\delta_g$ is zero for the quark and 1 for the gluon. The $B_2$'s are the coefficients in front of the delta function $\delta (x-1)$ in the Altarelli-Parisi splitting functions and have been calculated to the third order \cite{Vogt192}: \begin{eqnarray} \label{b} B_{2,q}^{(1)} &=& 3C_F, \nonumber \\ B_{2,q}^{(2)} &=& 4C_FC_A\left(\frac{17}{24} + \frac{11}{3}\zeta_2 -3\zeta_3\right) - 4C_F N_F \left(\frac{1}{12} + \frac{2}{3}\zeta_2\right) + 4C_F^2\left(\frac{3}{8} - 3\zeta_2 + 6\zeta_3\right), \nonumber \\ B_{2,q}^{(3)} &=& 16C_AC_FN_F\left(\frac{5}{4} - \frac{167}{54}\zeta_2 + \frac{1}{20}\zeta_2^2 + \frac{25}{18}\zeta_3\right) \nonumber\\ && + 16C_AC_F^2\left(\frac{151}{64} + \zeta_2\zeta_3 - \frac{205}{24}\zeta_2 - \frac{247}{60}\zeta_2^2 + \frac{211}{12}\zeta_3 + \frac{15}{2}\zeta_5\right) \nonumber \\ && - 16C_A^2C_F \left(\frac{1657}{576} - \frac{281}{27}\zeta_2 + \frac{1}{8}\zeta_2^2 + \frac{97}{9}\zeta_3 - \frac{5}{2}\zeta_5\right) \nonumber \\ && - 16 C_FN_F^2\left(\frac{17}{144} - \frac{5}{27}\zeta_2 + \frac{1}{9}\zeta_3\right) \nonumber \\ && - 16C_F^2 N_F \left(\frac{23}{16} - \frac{5}{12}\zeta_2 - \frac{29}{30}\zeta_2^2 + \frac{17}{6}\zeta_3\right) \nonumber \\ && + 16 C_F^3\left(\frac{29}{32} - 2\zeta_2\zeta_3 + \frac{9}{8}\zeta_2 +
\frac{18}{5}\zeta_2^2 + \frac{17}{4}\zeta_3 - 15\zeta_5\right), \end{eqnarray} for quarks, and \begin{eqnarray} B_{2,g}^{(1)} &=& \frac{11}{3}C_A - \frac{2}{3}N_F, \nonumber \\ B_{2,g}^{(2)} &=& 4C_AN_F \left(-\frac{2}{3}\right) + 4C_A^2 \left(\frac{8}{3} + 3\zeta_3\right) + 4C_F N_F \left(-\frac{1}{2}\right), \nonumber \\ B_{2,g}^{(3)} &=& 16C_AC_FN_F \left(-\frac{241}{288}\right) + 16C_AN_F^2\frac{29}{288} - 16C_A^2N_F\left(\frac{233}{288} + \frac{1}{6}\zeta_2 + \frac{1}{12}\zeta_2^2 + \frac{5}{3}\zeta_3\right) \nonumber \\ && + 16C_A^3\left(\frac{79}{32} -\zeta_2\zeta_3 + \frac{1}{6}\zeta_2 + \frac{11}{24}\zeta_2^2 + \frac{67}{6}\zeta_3 - 5\zeta_5\right) \nonumber \\ && + 16C_FN_F^2 \frac{11}{144} + 16C_F^2N_F\frac{1}{16}, \end{eqnarray} for gluons. The universal functions $f_{(q,g)}$ are given by \begin{eqnarray} \label{f} f_{(q,g)}^{(1)} &=& 0, \nonumber \\ f_{(q,g)}^{(2)} &=& C_{(q,g)}C_A \left[{808 \over 27} -{22 \over 3}\zeta_2 -28\zeta_3\right]+ C_{(q,g)} N_F\left[-{112 \over 27}+{8 \over 3}\zeta_2\right], \nonumber \\ f_{(q,g)}^{(3)} &=& C_{(q,g)}C_A^2\left[\frac{136781}{729}-\frac{12650}{81}\zeta_2 -\frac{1361}{3}\zeta_3 + \frac{352}{5}\zeta^2_2 + \frac{176}{3}\zeta_2\zeta_3 + 192\zeta_5\right] \nonumber \\ &&+C_{(q,g)}C_AN_F\left[-\frac{11842}{729} + \frac{2828}{81}\zeta_2 +\frac{728}{27}\zeta_3 - \frac{96}{5}\zeta_2^2\right] +C_{(q,g)} C_FN_F\left[-\frac{1771}{27} \right. \nonumber \\ && \left.+ 4\zeta_2 +\frac{304}{9}\zeta_3 + \frac{32}{5}\zeta_2^2\right] + C_{(q,g)}N_F^2\left[-\frac{2080}{729} - \frac{40}{27}\zeta_2 + \frac{112}{27}\zeta_3\right]. \end{eqnarray} The tilde functions in Eq.~(\ref{gg}) are not given here since they do not contribute to the anomalous dimension, as their contribution to the form factors is canceled (they can be found in \cite{Vogt192}). Finally, the anomalous dimension of the effective currents can be expressed in terms of the $A$ and $G$ functions.
If one writes $\gamma_1 =\sum_ia_s^i \gamma_1^{(i)}$, then \begin{equation} \label{ano} \gamma_{1,(q,g)}^{(i)} = A_{(q,g)}^{(i)} \ln Q^2/\mu^2 +B^{(i)}_{1,(q,g)}+2i\delta_g\beta_{i-1}, \end{equation} where \begin{equation} B_{1,(q,g)}^{(i)}=-2B_{2,(q,g)}^{(i)}-f^{(i)}_{(q,g)}, \end{equation} and the QCD $\beta$-function is given by \begin{eqnarray} \beta(a_s)=-\frac{d\ln \alpha_s}{d\ln \mu^2}=\beta_0a_s+\beta_1a_s^2+\cdots, \end{eqnarray} with $\beta_0=11C_A/3-2N_F/3$. The above expression for $\gamma_1$ is expected to hold to all orders in perturbation theory. In the gluon case, the last term is present when the anomalous dimension is defined in terms of the matching coefficient $C_g$ and is absent when it is defined in terms of the effective current. The anomalous dimensions can also be calculated from the matching coefficient $C(Q^2/\mu^2,\alpha_s(\mu^2))$ extracted from known results for the form factors. First, we take the logarithm of Eq.~(\ref {CS}), \begin{equation} \ln F=\ln C(Q^2/\mu^2)+\ln S(Q^2/\mu^2,1/\epsilon). \end{equation} Then we separate out the poles from the form factor logarithm; these belong to $S(Q^2/\mu^2,1/\epsilon)$. The finite part left over is just the logarithm of the matching coefficient, $\ln C$, to any desired order. So, eventually, we get the following result for the anomalous dimension, valid to arbitrary order in $\alpha_s$, \begin{equation} \gamma_1=\frac{d}{d\ln\mu}\left\{\ln F|_{\rm finite~part}\right\}, \end{equation} where the form factor in the above equation has been renormalized (including coupling constant renormalization). Using the above equation, we have calculated the anomalous dimensions for the quark and gluon currents in the effective theory up to three-loop order, and they are exactly the same as Eq.~(\ref {ano}). We should point out here that the anomalous dimension of the quark current is the same for the scattering case (DIS) and for the annihilation case (DY).
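As a one-loop consistency check of Eq.~(\ref{ano}) (our illustration), the coefficients quoted above give
\begin{equation}
\gamma^{(1)}_{1,q}=4C_F\ln\frac{Q^2}{\mu^2}-6C_F\,,\qquad
\gamma^{(1)}_{1,g}=4C_A\ln\frac{Q^2}{\mu^2}-2B^{(1)}_{2,g}+2\beta_0=4C_A\ln\frac{Q^2}{\mu^2}\,,
\end{equation}
since $B^{(1)}_{2,g}=11C_A/3-2N_F/3=\beta_0$ and $f^{(1)}_{(q,g)}=0$: at one loop the $2\delta_g\beta_0$ term cancels exactly against $-2B^{(1)}_{2,g}$, leaving a pure cusp contribution for the gluon current.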
To calculate the matching coefficients, $C(Q^2/\mu^2,\alpha_s(\mu^2))=\sum_ia_s^i(\mu^2)C^{(i)}(Q^2/\mu^2)$ for DIS, DY and Higgs production, we need the expressions for the quark form factor (space-like case and time-like case) \cite{Matsuura} , and for SM Higgs production \cite{Sall,Catani01,Har01} up to the same order. It should be noted that for DY and Higgs cases, $C^{(i)}$ contains imaginary parts that need be taken into account. For our purposes, it is enough to keep the imaginary part for $C^{(1)}$ only. Normalizing $C^{(0)}$ to $1$ we find for DIS, \begin{eqnarray} C^{(1)}_{\rm DIS}(Q^2/\mu^2)&=&C_F\left[-\ln^2\left(\frac{Q^2}{\mu^2}\right)+3\ln\left(\frac{Q^2}{\mu^2}\right)-8+\zeta_2\right],\nonumber \\ C^{(2)}_{\rm DIS}(Q^2/\mu^2)&=& C_F^2\left[\frac{1}{2}\left(\ln^2\left(\frac{Q^2}{\mu^2}\right)-3\ln\left(\frac{Q^2}{\mu^2}\right)+8-\zeta_2\right)^2\right.\nonumber\\ &&\left.+ \left(\frac{3}{2}-12\zeta_2+24 \zeta_3\right)\ln\left(\frac{Q^2}{\mu^2}\right)-\frac{1}{8}+29\zeta_2-30\zeta_3-\frac{44}{5}\zeta_2^2\right]\nonumber\\ &&+C_FN_F\left[-\frac{2}{9}\ln^3\left(\frac{Q^2}{\mu^2}\right)+\frac{19}{9}\ln^2\left(\frac{Q^2}{\mu^2}\right)-\left(\frac{209}{27}+\frac{4}{3}\zeta_2\right)\ln\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&\left.+\frac{4085}{324}+\frac{23}{9}\zeta_2+\frac{2}{9}\zeta_3\right]\nonumber\\ &&+C_FC_A\left[\frac{11}{9}\ln^3\left(\frac{Q^2}{\mu^2}\right)+\left(2\zeta_2-\frac{233}{18}\right)\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\\ &&\left.+\left(\frac{2545}{54}+\frac{22}{3}\zeta_2-26\zeta_3\right)\ln\left(\frac{Q^2}{\mu^2}\right)-\frac{51157}{648}-\frac{337}{18}\zeta_2 +\frac{313}{9}\zeta_3+\frac{44}{5}\zeta_2^2\right].\nonumber \end{eqnarray} The logarithms in the above result have been presented in \cite{Feng1}. For DY we can simply get the $C^{(i)}_{q}$ by replacing each $\ln\left(\frac{Q^2}{\mu^2}\right)$ in $C^{(i)}_{\rm DIS}$ with $\ln\left(\frac{Q^2}{\mu^2}\right)-i\pi$. 
This is just a result of the fact that the time-like quark form factor can be obtained from the space-like one by analytic continuation. For Higgs production we set $M_H^2=Q^2$ and we get \begin{eqnarray} C^{(1)}_g(Q^2/\mu^2)&=&C_A\left[-\ln^2\left(\frac{Q^2}{\mu^2}\right)+7\zeta_2+2i\pi\ln\left(\frac{Q^2}{\mu^2}\right) \right],\nonumber\\ {\rm Re}[C^{(2)}_g(Q^2/\mu^2)]&=&C_A^2\left[{1 \over 2}\ln^4\left(\frac{Q^2}{\mu^2}\right)+{11\over 9}\ln^3\left(\frac{Q^2}{\mu^2}\right)-\left({67 \over 9}-17\zeta_2\right)\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&\left.+\left({80\over 27}-{88\over 3}\zeta_2-2\zeta_3\right)\ln\left(\frac{Q^2}{\mu^2}\right)+{5105 \over 162}+{335\over 6}\zeta_2-{143\over 9}\zeta_3+{125\over 10}\zeta_2^2\right]\nonumber\\ &&+C_AN_F\left[-{2\over9}\ln^3\left(\frac{Q^2}{\mu^2}\right)+{10\over 9}\ln^2\left(\frac{Q^2}{\mu^2}\right)+\left({52\over 27}+{16\over 3}\zeta_2\right)\ln\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&\left.-{916\over 81}-{25\over 3}\zeta_2-{46\over 9}\zeta_3\right]\nonumber\\ &&+C_FN_F\left[2\ln^2\left(\frac{Q^2}{\mu^2}\right) -{67\over 6}+8\zeta_3\right]. \end{eqnarray} The logarithms in $C^{(i)}$ will be needed later on to show that the matching coefficients at $\mu_I$ are free of any logarithms, and we have not included the imaginary part of $C^{(2)}_g$ since it does not contribute to the accuracy in which we are interested. Using Eq.~(\ref{evo1}) we can write down the solution of the renormalization group equations for DY and Higgs, \begin{equation} \label {ss} C_{(q,g)}(Q^2/\mu_I^2,\alpha_s(\mu_I^2)) = C_{(q,g)}(1,\alpha_s(Q^2))~{\rm exp}{\left[\frac{I_1(Q,\mu_I)}{2}\right]}, \end{equation} where \begin{eqnarray} I_1 &=& -\int_{\mu_I}^Q\tilde \gamma_{1,(q,g)} \frac{d\mu}{\mu}, \nonumber \\ \tilde{\gamma}_{1,(q,g)}&=&\gamma_{1,(q,g)}-2i\delta_g\beta_{i-1}. \end{eqnarray} Here $C(1,\alpha_s(Q^2))\equiv C(\alpha_s(Q^2))$ is just the non-logarithmic part of $C(Q^2/\mu^2,\alpha_s(\mu^2))$.
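To see how the large logarithms build up in this first exponential, it is instructive to evaluate $I_1$ for the quark current in a fixed-coupling, one-loop sketch (our illustration; the running of $a_s$ generates the remaining towers of logarithms). Anticipating the identification $\mu_I^2=Q^2/{\overline N}^p$ made below, with $\gamma_1=a_s\left[4C_F\ln(Q^2/\mu^2)-6C_F\right]$ one finds
\begin{equation}
I_1=-\int_{\mu_I}^{Q}\gamma_1\,\frac{d\mu}{\mu}
=-a_sC_F\left[p^2\ln^2{\overline N}-3p\ln{\overline N}\right]+{\cal O}(a_s^2),
\end{equation}
so the double logarithms of ${\overline N}$ already exponentiate at this level.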
For the Higgs case, the subtraction in $\tilde{\gamma}_{1,g}$ is a result of the $\mu$ dependence of $C_\phi(\mu)$, which enters the effective lagrangian that one obtains after integrating out the top quark (see, e.g., \cite{Sall}). This $\mu$-dependence is governed by an anomalous dimension which we denote by $\gamma_T$, following the notation of \cite{Feng1}. There it was shown that \begin{equation} \gamma_T=a_s[-2\beta_0]+a_s^2[-4\beta_1]\,\,, \end{equation} so the only effect of this anomalous dimension, when combined with the anomalous dimension of the matching coefficient at the scale $Q^2$, is to cancel the $\beta_i$ terms in $\gamma_1$ for the Higgs case. For DIS we replace $C_q$ with $C_{\rm DIS}$, which runs with the same $\gamma_{1,q}$. In Eq.~(\ref {ss}) we encounter the first of three exponentials. The other two will be obtained below. Since $\mu_I^2$ will later be identified with $Q^2/{\overline N}^p$, where $p=1$ for DIS and $p=2$ for the DY and Higgs cases, it is clear that the exponential includes large logarithms of the form mentioned in the introduction. We again stress that $C(\alpha_s(Q^2))$ (for all three processes) and $\gamma_{1,(q,g)}$ are completely determined to a given ${\cal O}(\alpha_s^k)$ by the knowledge of the form-factor calculation up to the same order. \section{Matching Coefficients at $\mu_I$ and the Resummed Coefficient Functions} In this section we show how to extract the matching coefficients at the intermediate scale to ${\cal O}(\alpha_s^2)$ for DIS, DY and Higgs production from the known calculations of full QCD, and how to obtain resummed expressions for the coefficient functions. Since we are interested in the threshold region, we need to consider only the partonic channels that give rise to the singular contributions in the limit $z\rightarrow 1$, i.e., $\delta(1-z)$ and the ``plus'' distributions, ${\cal D}_i(z)$, where \begin{equation} {\cal D}_i(z)\equiv\left[ \frac{\ln^i(1-z)}{1-z}\right]_+\,.
\end{equation} For the DIS, DY and Higgs processes, these channels are $q+\gamma^*\rightarrow q$, $q+{\bar q}\rightarrow \gamma^*$ and $g+g\rightarrow H$, respectively. To the accuracy we are interested in, the ${\cal O}(\alpha_s^2)$ cross section from soft contributions is needed. The full QCD calculations for the cross sections can be found in Refs.~\cite{Matsuura} for DY, in Refs.~\cite{Catani01,Har01} for Higgs production and in Refs.~\cite{Zil1,Zil2} for DIS. The result in the soft limit can be written as \begin{equation} G^{(\rm {s+v})}(z)\equiv \sum_ia_s^i(\mu^2)G^{(i),(\rm {s+v})}(z), \end{equation} which contains both soft and virtual contributions, and where $G(z)$ is the inverse Mellin transform of $G_N$ in Eq.~(\ref{sigmaN}). Explicit expressions for $G^{(i),(\rm {s+v})}(z)$ with $i=1,2$ can be found in Ref.~\cite{Zil3} for DIS, in Ref.~\cite{Hamberg} for DY and in Refs.~\cite{Smith1,Smith2} for Higgs production. [$G^{(0)}(z)=\delta(1-z)$.] Using the following well-known Mellin transforms of the ${\cal D}_i(z)$ in the large-$N$ limit, \begin{eqnarray} {\cal D}_0(\overline N)&=&-\ln {\overline N}, \nonumber\\ {\cal D}_1(\overline N)&=& {1\over 2}\ln^2{\overline N}+{1\over 2}\zeta_2, \nonumber \\ {\cal D}_2(\overline N)&=&-{1\over 3}\ln^3{\overline N}-\zeta_2\ln{\overline N}-{2\over 3}\zeta_3,\nonumber \\ {\cal D}_3(\overline N)&=&{1\over 4}\ln^4{\overline N}+{3\over 2}\zeta_2\ln^2{\overline N}+2\zeta_3\ln{\overline N}+{27\over 20}\zeta_2^2, \end{eqnarray} we get the $G^{(i),{\rm (s+v)}}(\overline N)$, $i=0,1,2$. Explicit expressions are given in the Appendix. As we have already mentioned, the SCET is supposed to reproduce the same results.
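These large-$N$ moments are easy to check numerically (our illustration; function names are ours). The substitution $z=1-e^{-u}$ removes the endpoint singularity of the integrand:

```python
import numpy as np
from scipy.integrate import quad

# Mellin moments of the plus distributions D_i(z) = [ln^i(1-z)/(1-z)]_+ :
#   integral_0^1 dz (z^(N-1) - 1) ln^i(1-z) / (1-z).
# With z = 1 - exp(-u) the integrand becomes smooth on u in (0, infinity).

ZETA2 = np.pi**2 / 6.0

def mellin_D(i, N):
    f = lambda u: ((1.0 - np.exp(-u))**(N - 1.0) - 1.0) * (-u)**i
    return quad(f, 0.0, 50.0, limit=400)[0]   # tail beyond u ~ 50 is negligible

N = 1000.0
lnNbar = np.log(N) + np.euler_gamma   # ln(Nbar), with Nbar = N exp(gamma_E)

print(mellin_D(0, N), -lnNbar)                         # agree up to 1/N terms
print(mellin_D(1, N), 0.5 * lnNbar**2 + 0.5 * ZETA2)
```

The agreement improves as $N$ grows, since the quoted forms drop terms suppressed by $1/N$.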
To get the matching coefficient at the intermediate scale $\mu_I$, ${\cal M}_N=\sum_ia_s^i{\cal M}^{(i)}_N$, we need to factorize the virtual contribution from the following relation, \begin{equation} \label {master1} G_N^{({\rm s+v})}\left(\frac{Q^2}{\mu^2},{\overline N},\alpha_s(\mu^2)\right)=\Big| C\left(\frac{Q^2}{\mu^2},\alpha_s(\mu^2)\right)\Big|^2 \times {\cal M}_N\left(\frac{Q^2}{\mu^2},{\overline N},\alpha_s(\mu^2)\right). \end{equation} The content of this formula is simple: The finite part of the partonic cross section $G_N^{({\rm s+v})}$ comes from both the purely virtual, form-factor type of Feynman diagrams, which are included in $\vert C\vert^2 $, and from diagrams with at least one real gluon emitted into the final state, which are included in ${\cal M}_N$. However, there is a different way to look at it. The right-hand side is just the result of a two-step matching of the product of two full QCD currents, where at each step we collect the relevant contribution to the cross section. The first step accounts for $\vert C\vert^2 $ and the second one gives rise to ${\cal M}_N$. It should be noted that a multiple-step matching procedure, such as the one performed here, results in multiplicative matching coefficients. We also mention that the above equation could formally be proved, inductively in $\alpha_s$, by considering the cross section within the effective theory itself and relating it to the full QCD calculation in the soft limit.
Expanding the above equation to the third order, one gets \begin{eqnarray} \label{s+v} G^{(1),{\rm s+v}}_N&=&2{\rm Re}[C^{(1)}]+{\cal M}^{(1)}_N, \nonumber\\ G^{(2),{\rm s+v}}_N&=&|C^{(1)}|^2+2{\rm Re}[C^{(2)}]+2{\rm Re} [C^{(1)}]{\cal M}_N^{(1)}+{\cal M}_N^{(2)},\\ G^{(3),{\rm s+v}}_N&=&2{\rm Re}[C^{(1)}C^{(2)\ast}]+2{\rm Re}[C^{(3)}]+\vert C^{(1)}\vert^2{\cal M}_N^{(1)}\nonumber \\ && + 2{\rm Re}[C^{(1)}]{\cal M}_N^{(2)}+2{\rm Re}[C^{(2)}]{\cal M}_N^{(1)}+{\cal M}_N^{(3)}.\nonumber \end{eqnarray} The above factorization is consistent with that considered in \cite{MOCH}. We get the following result for DIS, \begin{eqnarray} {\cal M}_{{N,\rm DIS}}^{(1)}&=&C_F\left[2{\rm L}^2+3{\rm L}+7-4\zeta_2\right],\nonumber\\ {\cal M}_{N,{\rm DIS}}^{(2)}&=&C_F^2\left[2{\rm L}^4+6{\rm L}^3+\left(\frac{37}{2}-8\zeta_2\right){\rm L}^2+\left({45\over 2}-24\zeta_2+24\zeta_3\right){\rm L}\right] \nonumber\\ &&+C_FC_A\left[\frac{22}{9}{\rm L}^3+\left(\frac{367}{18}-4\zeta_2\right){\rm L}^2-\left(-\frac{3155}{54}+\frac{22}{3}\zeta_2+40\zeta_3\right){\rm L}\right]\nonumber\\ &&-C_FN_F\left[ \frac{4}{9}{\rm L}^3+\frac{29}{9}{\rm L}^2-\left(\frac{4}{3}\zeta_2-\frac{247}{27}\right){\rm L}\right]\nonumber\\ &&+C_F^2\left[\frac{205}{8}-\frac{97}{2}\zeta_2-6\zeta_3+\frac{122}{5}\zeta_2^2\right] +C_FC_A\left[\frac{53129}{648}-\frac{155}{6}\zeta_2-18\zeta_3-\frac{37}{5}\zeta_2^2\right]\nonumber\\ &&+C_F N_F\left[-\frac{4057}{324}+\frac{13}{3}\zeta_2\right], \end{eqnarray} where ${\rm L}=\ln\frac{\mu^2{\overline N}}{Q^2}$. The above result has also been obtained in \cite {neu1} where an explicit two-loop calculation of a suitably defined jet function was performed \footnote {We thank the authors of Ref.~[39] for pointing out some misprints in Eq.~(34) that appeared in an earlier version of this paper.}. 
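The combinatorics of the expansion in Eq.~(\ref{s+v}) can be verified mechanically: for arbitrary complex $C^{(i)}$ and real ${\cal M}_N^{(i)}$ (the numbers below are hypothetical, used only to exercise the algebra), the series coefficients of $\vert C(a)\vert^2{\cal M}_N(a)$ must reproduce the quoted formulas. A minimal sketch:

```python
# Arbitrary (hypothetical) matching data: complex Wilson coefficients and
# real soft matching coefficients; only the combinatorics is being tested.
C1, C2, C3 = 1.3 - 0.7j, -0.4 + 2.1j, 0.9 + 0.3j
M1, M2, M3 = 2.2, -1.5, 0.8

# Series coefficients of |C(a)|^2 = C(a) * conj(C(a)) for real a
h1 = (C1 + C1.conjugate()).real
h2 = (C2 + C2.conjugate() + C1 * C1.conjugate()).real
h3 = (C3 + C3.conjugate() + C1 * C2.conjugate() + C2 * C1.conjugate()).real

# Multiply by M(a) = 1 + a M1 + a^2 M2 + a^3 M3 (plain polynomial product)
G1 = h1 + M1
G2 = h2 + h1 * M1 + M2
G3 = h3 + h2 * M1 + h1 * M2 + M3

# The expanded formulas quoted in Eq. (s+v)
G1_text = 2 * C1.real + M1
G2_text = abs(C1)**2 + 2 * C2.real + 2 * C1.real * M1 + M2
G3_text = (2 * (C1 * C2.conjugate()).real + 2 * C3.real + abs(C1)**2 * M1
           + 2 * C1.real * M2 + 2 * C2.real * M1 + M3)

print(abs(G1 - G1_text), abs(G2 - G2_text), abs(G3 - G3_text))
```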
For DY, we get \begin{eqnarray} {\cal M}_{N,q}^{(1)}&=&C_F\left[2{\L}^2+2\zeta_2\right],\nonumber\\ {\cal M}_{N,q}^{(2)}&=&C_F^2\left({1\over 2}\right)\left[2{\L}^2+2\zeta_2\right]^2+C_AC_F\left[\frac{22}{9}{\L}^3+\left(\frac{134}{9}-4\zeta_2\right) {\L}^2+\left(\frac{808}{27}-28\zeta_3\right) {\L}\right]\nonumber\\ &&-C_FN_F\left[ \frac{4}{9}{\L}^3+\frac{20}{9} {\L}^2+\frac{112}{27}{\L}\right]\nonumber\\ &&+C_F C_A\left[\frac{2428}{81}+\frac{67}{9}\zeta_2-\frac{22}{9}\zeta_3-12\zeta_2^2\right]\nonumber\\ &&+C_F N_F\left[-\frac{328}{81}-\frac{10}{9}\zeta_2+\frac{4}{9}\zeta_3\right], \end{eqnarray} where ${\L}=\ln\frac{\mu^2\overline N^2}{Q^2}$. And finally, for the Higgs case, we have \begin{eqnarray} {\cal M}_{N,g}^{(1)}&=&C_A\left[2 {\L}^2+2\zeta_2\right],\nonumber\\ {\cal M}_{N,g}^{(2)}&=&C_A^2\left({1\over 2}\right)[2 {\L}^2+2\zeta_2]^2+C_AC_A\left[\frac{22}{9}{\L}^3+\left(\frac{134}{9}-4\zeta_2\right) {\L}^2+\left(\frac{808}{27}-28\zeta_3\right) {\L}\right]\nonumber\\ &&-C_AN_F\left[ \frac{4}{9}{\L}^3+\frac{20}{9} {\L}^2+\frac{112}{27}{\L}\right]\nonumber\\ &&+ C_AC_A\left[\frac{2428}{81}+\frac{67}{9}\zeta_2-\frac{22}{9}\zeta_3-12\zeta_2^2\right]\nonumber\\ &&+C_A N_F\left[-\frac{328}{81}-\frac{10}{9}\zeta_2+\frac{4}{9}\zeta_3\right]. \end{eqnarray} For all three processes, we have $G_N^{(0)}=1$. From the above results it is clear that the logarithms $\rm L$ and $\L$ vanish when we set: $\mu^2=\mu^2_I\equiv Q^2/{\overline N}^p$. Of course, this has to be the case as the matching coefficients should be logarithmically free, and we can write \begin{eqnarray} {\cal M}_{N}\left(\frac{Q^2}{\mu^2},{\overline N},\alpha_s(\mu^2)\right)={\cal M}_{N}\left(\ln\left(\frac{Q^2}{{\overline N}^p\mu^2}\right),\alpha_s(\mu^2)\right), \end{eqnarray} and for $\mu^2=\mu_I^2\equiv \frac{Q^2}{{\overline N}^p}$ we have \begin{equation} {\cal M}_{N}\left(\ln\left(\frac{Q^2}{{\overline N}^p\mu^2_I}\right),\alpha_s(\mu^2)\right)={\cal M}_N(\alpha_s(\mu_I^2)). 
\end{equation} These observations are valid to all orders in perturbation theory \cite{Man03} and they lead to a strong constraint on the anomalous dimensions of the effective operators on both sides of the matching scale. Another interesting feature emerges from the results of the DY and Higgs cases, ${\cal M}_{N,q}^{(i)}$ and ${\cal M}_{N,g}^{(i)}$, $i=1,2$: One can simply get the latter from the former by replacing the overall factor $C_F$ with $C_A$ in the non-Abelian part. The Abelian part exponentiates, and hence every occurrence of $C_F$ is replaced by $C_A$. In this sense, the matching coefficients seem to be universal. This could be argued from the fact that, in the soft-gluon limit, only the color charges of the annihilating quarks and gluons are relevant. Following the same steps as we did after the first-stage matching at $Q^2$, we now need to consider the running of the effective operators that were used to perform the matching at $\mu_I$. However, at and below the scale $\mu_I$ they are just the conventional PDFs taken to the limit $z\rightarrow 1$. As such, the running of the effective operators (the PDFs) is governed by the well-known DGLAP (Dokshitzer-Gribov-Lipatov-Altarelli-Parisi) evolution equation with anomalous dimension \begin{eqnarray} \gamma^N_{2,{(q,g)}}=A_{(q,g)}\ln {\overline N}^2-2B_{2,{(q,g)}}, \end{eqnarray} where $A_{(q,g)}$ and $B_{2,{(q,g)}}$ are given in Eqs.~(\ref {a}) and (\ref {b}). We include the running effects in \begin{eqnarray} I_2=2\int_{\mu_I}^{\mu_F}\frac{d\mu}{\mu}\gamma_{2,{(q,g)}}, \end{eqnarray} where $\mu_F$ is the factorization scale for the parton distributions. The resummed factorization coefficient functions for DY and Higgs are \begin{eqnarray} \label{asd} G_{N,(q,g)}(Q) &=& \vert C_{(q,g)}(\alpha_s(Q))\vert^2 e^{I_1(Q,\mu_I)} \times {\cal M}_{N,(q,g)}(\alpha_s(\mu_I)) e^{I_2(\mu_I,\mu_F)}, \end{eqnarray} where we have omitted $C_\phi^2$ for Higgs production.
[The definitions of $I_1$ and $I_2$ differ by a minus sign from Ref.~\cite{IdiJi05}.] Anticipating the discussion of the next section, we will set the factorization scale $\mu_F=Q$. The above equation can be brought into an equivalent form by exploiting the running of $\alpha_s$ from $\mu_I$ to $Q$ in ${\cal M}_{N,(q,g)}(\alpha_s(\mu_I))$; \begin{eqnarray} {\cal M}_{N,(q,g)}(\alpha_s(\mu_I^2))={\cal M}_{N,(q,g)}(\alpha_s(Q^2))~\exp[I_3], \end{eqnarray} where \begin{eqnarray} \label {BC} I_3=-2\int_{\mu_I}^{Q}\frac{d\mu}{\mu}\triangle B_{(q,g)}, \end{eqnarray} with \begin{eqnarray} \triangle B_{(q,g)}\equiv -\beta(\alpha_s)\frac{d\ln {\cal M}_{N,(q,g)}}{d\ln \alpha_s}. \end{eqnarray} The last two equations are also true for the DIS case. Thus we write \begin{equation} \label{csr} G_N(Q) = {\cal F}(\alpha_s(Q)) e^{I(\lambda, \alpha_s(Q))}, \end{equation} where ${\cal F} = \vert C_{(q,g)}(\alpha_s(Q))\vert ^2{\cal M}_{(q,g)}(\alpha_s(Q))$ depends only on $\alpha_s(Q)$. The subscript $N$ of ${\cal M}$ has been omitted since there is no large logarithmic dependence left in the matching coefficients. $I=I_1+I_2+I_3$ is a function of $\lambda=b_0\,\alpha_s(Q)\ln \overline N$ (with $b_0=\beta_0/4\pi$) and of $\alpha_s(Q)$, with all leading and sub-leading large logarithms resummed. Since the cross section $\sigma_N$ in Eq.~(2) is independent of the intermediate scale $\mu_I$, from Eq.~(\ref{asd}) and the definitions of $\gamma_1$ and $\gamma_2$ we get the following relation for DY and Higgs; \begin{eqnarray} \frac{d\ln {\cal M}_{N,(q,g)}(\alpha_s(\mu^2),\L)}{d\ln \mu}=\left[2\gamma_2-2\gamma_1\right]_{(q,g)}=2[A\L+f]_{(q,g)}, \end{eqnarray} from which we get \begin{eqnarray} \frac{d\ln {\cal M}_{N,(q,g)}(\alpha_s(\mu^2),\L)}{d\ln \mu}\Big |_{\mu=\mu_I}=2f_{(q,g)}(\alpha_s(\mu_I^2)),~~~~\mu_I=\frac{Q}{\overline N} \end{eqnarray} where $A_{(q,g)}$ are given in Eq.~(\ref {a}) and $f_{(q,g)}$ are given in Eq.~(\ref {f}).
The last equation sheds light on the physical meaning of the functions $f_{(q,g)}$: they give the anomalous dimension of the matching coefficient ${\cal M}$ evaluated at the intermediate scale $\mu_I$. Here we see that the universality of these functions could be explained by the fact that ${\cal M}_{(q,g)}$ are themselves universal. The last equation also shows that the same $A_{(q,g)}$ appears in the logarithmic parts of $\gamma_{1,(q,g)}$ and $\gamma_{2,(q,g)}$, because otherwise the logarithms at $\mu_I$ would not cancel in ${\cal M}_N$. For DIS a similar analysis can be performed; however, we have to include only one-half of $I_2$ in Eq.~(\ref{asd}) since we match onto a single PDF. With this we get \begin{eqnarray} \frac{d\ln {\cal M}_{N,\rm DIS}(\alpha_s(\mu^2),{\rm L})}{d\ln \mu}=\left[\gamma_2-2\gamma_1\right]_q=2[A{\rm L}+B_2+f]_q, \end{eqnarray} from which we obtain at the intermediate scale, \begin{eqnarray} \frac{d\ln {\cal M}_{N,{\rm DIS}}(\alpha_s(\mu),{\rm L})}{d\ln \mu}\Big |_{\mu=\mu_I}=2[B_2+f]_q(\alpha_s(\mu_I^2)),~~~~\mu_I=\frac{Q}{\sqrt{\overline N}}. \end{eqnarray} Here there is an extra contribution from $B_2$. \section{Comparison With the Traditional Approach and Explicit Results To N$^3$LL Order} In this section we will illustrate the equivalence of the EFT approach and the traditional one, which relies on the refactorization of hard processes as we mentioned in the introduction. The renormalon problem in the latter approach arises from performing the resummation uniformly for all moments, which necessarily encounters the small scale $Q/N^p$ at fixed $Q$ when $N$ is sufficiently large. The EFT approach avoids that by short-cutting the steps when this scale becomes of order $\Lambda_{\rm QCD}$. We will start by showing this first for DY and Higgs production, and then turn to the DIS case. In the last subsection, we give the explicit form of the relevant integrals obtained in the EFT approach.
\subsection{Drell-Yan and Higgs} One of the well-known forms used to express the coefficient function for DY and Higgs in moment space is the following \cite{Managano}: \begin{equation} G_N(Q^2)=g_0(\alpha_s(Q^2))e^{I_\bigtriangleup}\Delta C(\alpha_s(Q^2)), \end{equation} where we have normalized the Born term to $1$. $g_0$ has the conventional expansion $g_0=\sum_ia^ig_{0i}$. [In this subsection we omit the subscripts $q$ and $g$, referring to DY and Higgs production.] The sole role of the term $\bigtriangleup C$ is to cancel the non-logarithmic contributions that appear in the exponent. These contributions arise from the various $\zeta$-terms in the Mellin transform of the ``plus'' distributions. The Sudakov exponential term $I_\triangle$ is given by \begin{eqnarray} \label{tria} I_{\triangle}=\int_0^1dz\frac{z^{N-1}-1}{1-z}\left[2\int_{Q^2}^{(1-z)^2Q^2}\frac{d\mu^2}{\mu^2} A(\alpha_s(\mu^2)) +D(\alpha_s((1-z)^2Q^2))\right], \end{eqnarray} where, as already mentioned, we set $\mu_F^2=Q^2$. As noted above, $I_\triangle$ contains both logarithmic and non-logarithmic contributions. The quantities $g_0$, $A$ and $D$ have the usual expansion in $a_s$ and they are already known up to ${\cal O}(\alpha_s^3)$ \cite{Vogt288}. $A$ is identical to the logarithmic coefficient in $\gamma_1$ and $\gamma_2$. It is our aim to relate these quantities to those that appear in $G_N$ of Eq.~(\ref {asd}). For this we follow the procedure outlined in Appendices A, B and C of \cite{Catani:2003zt}. The integral in $I_\triangle$ can be rewritten in terms of the already defined $I_1$, $I_2$ and $I_3$, \begin{eqnarray} I\equiv I_1+I_2+I_3=I_{\triangle}+\ln \triangle C(\alpha_s(Q^2)), \end{eqnarray} where the coefficient function $\bigtriangleup C$ does not depend on $\mu_I\sim Q/{\overline N}$.
To prove the above relation, we first use the following expansion; \begin{equation} \label{z} z^{N-1}-1=-\tilde{\Gamma}\left(1-\frac{\partial}{\partial\ln \overline N}\right)\theta\left(1-z-\frac{1}{\overline N}\right)+{\cal O}(1/{\overline N}), \end{equation} where the $\tilde \Gamma$ function is related to the usual gamma function, \begin{equation} \tilde\Gamma\left(1-\frac{\partial}{\partial\ln\overline N}\right)=1-\Gamma_2\left(\frac{\partial}{\partial\ln\overline N}\right)\left(\frac{\partial}{\partial\ln\overline N}\right)^2, \end{equation} where the first parenthesis on the right-hand side is the argument of the $\Gamma_2$ function, and \begin{eqnarray} \Gamma_2(\epsilon)={1\over \epsilon^2}[1-e^{-\gamma_E\epsilon}\Gamma(1-\epsilon)]= -\frac{1}{2}\zeta_2-\frac{1}{3}\zeta_3\epsilon-{9\over 40}\zeta_2^2\epsilon^2+{\cal O}(\epsilon^3). \end{eqnarray} In Eq.~(\ref{z}) we used $(\partial/\partial\ln N) f(\ln {\overline N})= (\partial/\partial\ln {\overline N})f(\ln {\overline N})$ for an arbitrary function $f$. After some algebra, $I_{\triangle}$ can be expressed as \begin{eqnarray} I_{\triangle}&=&-\tilde{\Gamma}\left(1-\frac{\partial}{\partial\ln\overline N}\right)\left\{\int_{Q^2/{\overline N}^2}^{Q^2}\frac{d\mu^2}{\mu^2}\left[A(\alpha_s(\mu^2))\ln\frac{Q^2}{\mu^2}+\frac{1}{2}D(\mu^2)\right] \right.\nonumber\\ &&+\left.\int_{Q^2}^{Q^2/{\overline N}^2}\frac{d\mu^2}{\mu^2}A(\alpha_s(\mu^2))\ln\overline{N}^2\right\}. \end{eqnarray} The double derivative from $\tilde \Gamma$ acting on the curly bracket above gives a contribution \begin{eqnarray} \Gamma_2\left(\frac{\partial}{\partial \ln {\overline N}}\right)\left[\frac{\partial}{\partial \ln {\overline N}}D(\alpha_s(Q^2/{\overline N}^2))-4A(\alpha_s(Q^2/{\overline N}^2))\right].
\end{eqnarray} To compare $I_\triangle$ with the exponent $I=I_1+I_2+I_3$, we express the latter in the form \begin{eqnarray} I_1+I_2+I_3&=&-\left\{\int_{Q^2/{\overline N}^2}^{Q^2}\frac{d\mu^2}{\mu^2}\left[A(\alpha_s(\mu^2))\ln\frac{Q^2}{\mu^2} +(B_1+{\bigtriangleup B}+2B_2)\right]\right.\nonumber\\ &&\left.+\int_{Q^2}^{Q^2/{\overline N}^2}\frac{d\mu^2}{\mu^2}A(\alpha_s(\mu^2))\ln{\overline N}^2\right\}. \label{i-relation} \end{eqnarray} Matching the two integrals, we get \begin{eqnarray} \label{gamma} &-&\int_{Q^2/{\overline N}^2}^{Q^2}\frac{d\mu^2}{\mu^2}(B_1+\bigtriangleup B+2B_2)(\alpha_s(\mu^2))\nonumber \\~~~~ &=& \Gamma_2\left(\frac{\partial}{\partial \ln {\overline N}}\right)\left[\frac{\partial}{\partial \ln {\overline N}}D(\alpha_s(Q^2/{\overline N}^2))-4A(\alpha_s(Q^2/{\overline N}^2))\right]\nonumber \\ && ~~~-\frac{1}{2}\int_{Q^2/{\overline N}^2}^{Q^2}\frac{d\mu^2}{\mu^2}D(\alpha_s(\mu^2))+\ln \bigtriangleup C(\alpha_s(Q^2)). \end{eqnarray} The above equation can be solved by perturbative expansion in $\alpha_s$. If the equality given in Eq.~(\ref{gamma}) holds for all values of ${\overline N}$, then for ${\overline N}=1$ we get \begin{equation} \ln \bigtriangleup C(\alpha_s(Q^2))=-\Gamma_2(\partial_{\alpha_s})\left[\partial _{\alpha_s}D(\alpha_s(Q^2/{\overline N}^2))-4A(\alpha_s(Q^2/{\overline N}^2))\right]\Bigg|_{{\overline N}=1}, \end{equation} where we follow \cite{Catani:2003zt} and replace the derivative $\partial/\partial \ln {\overline N}$ with $\partial_{\alpha_s}$, where \begin{equation} \partial_{\alpha_s}\equiv 2\frac{d\alpha_s(\mu^2)}{d\ln\mu^2}\frac{\partial}{\partial\alpha_s}=-2\beta(\alpha_s) \alpha_s\frac{\partial}{\partial\alpha_s}, \end{equation} and, hence, \begin{equation} \left(\frac{\partial}{\partial \ln {\overline N}}\right)f(\alpha_s(Q^2/{\overline N}^2))=\partial_{\alpha_s}f(\alpha_s(Q^2/{\overline N}^2)), \end{equation} where $f$ is an arbitrary function.
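The expansion of $\Gamma_2(\epsilon)$ quoted above follows from the identity $e^{-\gamma_E\epsilon}\Gamma(1-\epsilon)=\exp\big(\sum_{k\ge 2}\zeta_k\epsilon^k/k\big)$ together with $\zeta_4={2\over 5}\zeta_2^2$. The coefficients can also be checked numerically; a small illustrative sketch:

```python
import math

gammaE = 0.5772156649015329
zeta2 = math.pi**2 / 6
zeta3 = 1.2020569031595943

def Gamma2(eps):
    """Direct evaluation of [1 - exp(-gammaE*eps) * Gamma(1-eps)] / eps^2."""
    return (1.0 - math.exp(-gammaE * eps) * math.gamma(1.0 - eps)) / eps**2

def Gamma2_series(eps):
    """Truncated expansion quoted in the text (through order eps^2)."""
    return -0.5 * zeta2 - (zeta3 / 3.0) * eps - (9.0 / 40.0) * zeta2**2 * eps**2

for eps in (0.05, -0.05, 0.01):
    print(eps, abs(Gamma2(eps) - Gamma2_series(eps)))  # O(eps^3) remainder
```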
Applying one more $\partial/\partial \ln {\overline N}=\partial_{\alpha_s}$ to both sides of Eq.~(\ref {gamma}) we get our master relation \begin{equation} \label{MAS} 2( B_1+\triangle B+2B_2)(\alpha_s(\mu^2))=D(\alpha_s(\mu^2))+\partial_{\alpha_s}\Gamma_2(\partial_{\alpha_s})\left[4A-\partial_{\alpha_s} D\right](\alpha_s(\mu^2)), \end{equation} which can easily be solved for $D^{(i)}$ order by order in $\alpha_s$. As an example, let us expand both sides up to ${\cal O}(\alpha_s^4)$. First, we work out the expansion of the $\bigtriangleup B$ term. From Eq.~(\ref {BC}), we get \begin{eqnarray} \label{b00} \triangle B^{(0)}_{(q,g)}&=&\triangle B^{(1)}_{(q,g)}=0,\nonumber\\ \bigtriangleup B^{(2)}_{(q,g)}&=&-\beta_0{\cal M}_{N,(q,g)}^{(1)},\nonumber\\ \bigtriangleup B^{(3)}_{(q,g)}&=&-\beta_0\left[2{\cal M}_{N,(q,g)}^{(2)}-\left({\cal M}_{N,(q,g)}^{(1)}\right)^2\right]-\beta_1{\cal M}_{N,(q,g)}^{(1)}, \\ \triangle B^{(4)}_{(q,g)}&=&-\beta_0\left[3{\cal M}_{N,(q,g)}^{(3)}-3{\cal M}_{N,(q,g)}^{(1)}{\cal M}_{N,(q,g)}^{(2)}+\left ({\cal M}_{N,(q,g)}^{(1)}\right)^3\right]\nonumber \\ && -\beta_1\left[2{\cal M}_{N,(q,g)}^{(2)}-\left({\cal M}_{N,(q,g)}^{(1)}\right)^2\right]\nonumber\\ &&-\beta_2{\cal M}_{N,(q,g)}^{(1)}.
\end{eqnarray} Noticing that $B_{1,(q,g)}^{(i)}+2B_{2,(q,g)}^{(i)}=-f^{(i)}_{(q,g)}$ and using the expansion of $\Gamma_2$, we get the $D^{(i)}$: \begin{eqnarray} \label{d00} D^{(0)}_{(q,g)}&=&D^{(1)}_{(q,g)}=0,\nonumber\\ D^{(2)}_{(q,g)}&=&-2f_{(q,g)}^{(2)}+2\triangle B^{(2)}_{(q,g)} +4\beta_0\zeta_2 A^{(1)}_{(q,g)}, \nonumber\\ D^{(3)}_{(q,g)}&=&-2f^{(3)}_{(q,g)}+2\triangle B_{(q,g)}^{(3)}+4\zeta_2\beta_1A^{(1)}_{(q,g)}+8\zeta_2\beta_0A^{(2)}_{(q,g)}+{32\over 3}\zeta_3\beta_0^2A^{(1)}_{(q,g)}, \nonumber\\ D^{(4)}_{(q,g)}&=&-2f^{(4)}_{(q,g)}+2\triangle B_{(q,g)}^{(4)}+12\zeta_2\beta_0A^{(3)}_{(q,g)}+8\zeta_2\beta_1A^{(2)}_{(q,g)}+32\zeta_3\beta_0^2A^{(2)}_{(q,g)}\nonumber\\ &&+{80\over 3}\zeta_3\beta_0\beta_1A^{(1)}_{(q,g)}+{216\over 5}\zeta_2^2\beta_0^3 A^{(1)}_{(q,g)}-12\zeta_2\beta_0^2D^{(2)}_{(q,g)}. \end{eqnarray} Thus, apart from coupling-constant running effects, $D$ is essentially $-2f =2B_1+4B_2$. From the last two equations we see that in order to get $D^{(k)}$, the only same-order information needed is $f^{(k)}$. All the quantities needed to calculate $D^{(2)}$ and $D^{(3)}$ are known, and we get \begin{eqnarray} D^{(2)}_{(q,g)}&=&C_{(q,g)}\left\{C_A\left(-\frac{101}{27}+\frac{11}{3}\zeta_2+\frac{7}{2}\zeta_3\right)+ N_F\left(\frac{14}{27}-\frac{2}{3}\zeta_2\right)\right\} \end{eqnarray} and \begin{eqnarray} D^{(3)}_{(q,g)}&=& C_{(q,g)}C_A^2\left[-\frac{594058}{729}+\frac{98224}{81}\zeta_2+\frac{40144}{27}\zeta_3 -\frac{2992}{15}\zeta_2^2-\frac{352}{3}\zeta_2\zeta_3-384\zeta_5\right]\nonumber\\ &&+C_{(q,g)}C_AN_F\left[\frac{125252}{729}-\frac{29392}{81}\zeta_2-\frac{2480}{9}\zeta_3 +\frac{736}{15}\zeta_2^2\right]\nonumber\\ &&+C_{(q,g)}C_FN_F\left[\frac{3422}{27}-32\zeta_2- \frac{608}{9}\zeta_3-\frac{64}{5}\zeta_2^2\right]\nonumber\\ &&+C_{(q,g)}N_F^2\left[-\frac{3712}{729}+\frac{640}{27}\zeta_2+\frac{320}{27}\zeta_3\right], \end{eqnarray} where $C_{(q,g)}=C_F$ for the DY case and $C_A$ for the Higgs case.
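The structure of Eq.~(\ref{b00}) is pure series combinatorics: with the normalization fixed by $\triangle B^{(2)}=-\beta_0{\cal M}_N^{(1)}$, the higher coefficients follow from the chain rule $\triangle B\propto (da_s/d\ln\mu)\,d\ln{\cal M}_N/da_s$. A short numerical check with arbitrary (hypothetical) inputs:

```python
# Arbitrary (hypothetical) inputs: only the series combinatorics is tested.
M1, M2, M3 = 1.7, -0.6, 2.3        # matching-coefficient coefficients M^(i)
b0, b1, b2 = 0.75, 0.41, 0.19      # stand-ins for beta_0, beta_1, beta_2

# ln M(a) = c1 a + c2 a^2 + c3 a^3 + ...  from M(a) = 1 + M1 a + M2 a^2 + M3 a^3
c1 = M1
c2 = M2 - M1**2 / 2.0
c3 = M3 - M1 * M2 + M1**3 / 3.0

# Delta B = (da/dln mu) * d(ln M)/da with da/dln mu = -(b0 a^2 + b1 a^3 + b2 a^4)
# in the normalization fixed by Delta B^(2) = -beta_0 M^(1); collect powers of a:
dB2 = -b0 * c1
dB3 = -(b0 * 2 * c2 + b1 * c1)
dB4 = -(b0 * 3 * c3 + b1 * 2 * c2 + b2 * c1)

# Formulas quoted in Eq. (b00)
dB2_text = -b0 * M1
dB3_text = -b0 * (2 * M2 - M1**2) - b1 * M1
dB4_text = (-b0 * (3 * M3 - 3 * M1 * M2 + M1**3)
            - b1 * (2 * M2 - M1**2) - b2 * M1)

print(abs(dB2 - dB2_text), abs(dB3 - dB3_text), abs(dB4 - dB4_text))
```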
The above results agree with the recent calculations in \cite{Vogt265,Laenen284,RAVI}. The result for Higgs production has already been reported in \cite{JiPRL}. The non-logarithmic contribution ${\cal F}_{(q,g)}(Q^2)=\sum_ia^i{\cal F}^{(i)}_{(q,g)}=\vert C(Q^2)\vert^2{\cal M}_N(Q^2)$ can be calculated from the already-known results for $C^{(i)}_{(q,g)}(Q^2)$ and ${\cal M}_{N,(q,g)}^{(i)}(\alpha_s(Q^2))$, or we can simply read them off from the well-known results for $G^{i,{\rm (s+v)}}(Q^2)$ through Eq.~(\ref{master1}) and Eq.~(\ref{s+v}); \begin{eqnarray} {\cal F}_q^{(1)}&=&16 C_F (\zeta_2-1), \nonumber\\ {\cal F}_q^{(2)}&=&C_F^2\left[\frac{511}{4}-198\zeta_2-60\zeta_3+\frac{552}{5}\zeta_2^2\right]\nonumber\\ &&+C_FC_A\left[-\frac{1535}{12}+\frac{376}{3}\zeta_2+\frac{604}{9}\zeta_3-\frac{92}{5}\zeta_2^2\right]\nonumber\\ &&+C_FN_F\left[\frac{127}{6}-\frac{64}{3}\zeta_2+\frac{8}{9}\zeta_3\right], \end{eqnarray} for DY lepton-pair production. For the Higgs case, we have \begin{eqnarray} {\cal F}_g^{(1)}&=&16\zeta_2C_A, \nonumber\\ {\cal F}_g^{(2)}&=&C_A^2\left[93+{1072\over 9}\zeta_2-{308\over 9}\zeta_3+92\zeta_2^2\right]\nonumber\\ &&+C_AC_F\left[-\frac{1535}{12}+\frac{376}{3}\zeta_2+\frac{604}{9}\zeta_3-\frac{92}{5}\zeta_2^2\right]\nonumber\\ &&+C_AN_F\left[-\frac{80}{3}-\frac{160}{9}\zeta_2+\frac{88}{9}\zeta_3\right]+C_FN_F\left[-{67\over 3}+16\zeta_3\right]. \end{eqnarray} The above results agree with the ${\rm g}_{01}$ and ${\rm g}_{02}$ in \cite{Vogt265}. The $\gamma_E$ terms in the results of \cite{Vogt265} are due to the use of $N$ instead of ${\overline N}$ as in our case. These terms are easily reproduced. We also notice that their results for the $g_{0i}$ do not include the contributions from the non-logarithmic terms in $I_\triangle$. \subsection{DIS} For the DIS case there are essentially two major differences. The first is that the $D$ term in $I_\triangle$ is zero to all orders in $\alpha_s$ \cite{ridolfi,gardi}.
The second one comes from the ``jet function'', which encodes the effects of collinear gluon emission from the outgoing parton. So for DIS, the traditional approach yields the following expression for the exponent in the coefficient function $G_N(Q^2)$, \begin{eqnarray} I_{\rm DIS}=\int_0^1dz\frac{z^{N-1}-1}{1-z}\left[\int_{Q^2}^{(1-z)Q^2}\frac{d\mu^2}{\mu^2}A_q(\alpha_s(\mu^2))+{\cal B}_q(\alpha_s((1-z)Q^2))\right], \end{eqnarray} where again we set $\mu_F^2=Q^2$. We have used ${\cal B}$ here so that it will not be confused with the $B_i$'s introduced earlier. We now follow the same procedure as for the DY case, rewriting \begin{eqnarray} I_{\rm DIS }&=&-\tilde{\Gamma}\left(1-\frac{\partial}{\partial\ln\overline N}\right)\left\{\int_{Q^2/{\overline N}}^{Q^2}\frac{d\mu^2}{\mu^2}\left[A_q(\alpha_s(\mu^2))\ln\frac{Q^2}{\mu^2}+{\cal B}_q(\mu^2)\right] \right.\nonumber\\ &&+\left.\int_{Q^2}^{Q^2/{\overline N}}\frac{d\mu^2}{\mu^2}A_q(\alpha_s(\mu^2))\ln\overline{N}\right\}. \end{eqnarray} On the other hand, our result for DIS reads \begin{eqnarray} I_1+I_2+I_3&=&-\left\{\int_{Q^2/{\overline N}}^{Q^2}\frac{d\mu^2}{\mu^2}\left[A_q(\alpha_s(\mu^2))\ln\frac{Q^2}{\mu^2} +(B_{1,q}+{\bigtriangleup B}_{\rm DIS}+B_{2,q})\right]\right.\nonumber\\ &&\left.+\int_{Q^2}^{Q^2/{\overline N}}\frac{d\mu^2}{\mu^2}A_q(\alpha_s(\mu^2))\ln{\overline N}\right\}.
\label{i-relation-dis} \end{eqnarray} Matching the two results above, and noting that \begin{equation} \left(\frac{\partial}{\partial \ln {\overline N}}\right)f(\alpha_s(Q^2/{\overline N}))={1\over 2}\partial_{\alpha_s}f(\alpha_s(Q^2/{\overline N})), \end{equation} we get the final relation between the EFT and traditional approaches for the DIS case; \begin{eqnarray} && (B_{1,q}+\bigtriangleup B_{\rm DIS}+B_{2,q})(\alpha_s(\mu^2))\nonumber \\ &=&{\cal B}_q(\alpha_s(\mu^2))+{1\over 2}\partial_{\alpha_s}\Gamma_2\left({1\over 2}\partial_{\alpha_s}\right)\left[A_q-{1\over 2}\partial_{\alpha_s}{\cal B}_q\right](\alpha_s(\mu^2)), \end{eqnarray} from which we can solve for ${\cal B}_q^{(i)}$. Up to third order we have \begin{eqnarray} {\cal B}^{(1)}_q &=& -B_{2,q}^{(1)}, \nonumber \\ {\cal B}^{(2)}_q &=& -B_{2,q}^{(2)} - f_q^{(2)} + \Delta B_{\rm DIS}^{(2)} + \frac{1}{2} \zeta_2\beta_0 A_q^{(1)}, \nonumber \\ {\cal B}^{(3)}_q &=& -B_{2,q}^{(3)} - f_q^{(3)} + \Delta B_{\rm DIS}^{(3)} +\beta_0\zeta_2A^{(2)}_q + \frac{1}{2}\zeta_2\beta_1 A_q^{(1)} + \frac{2}{3}\zeta_3\beta_0^2 A_q^{(1)}. \end{eqnarray} Therefore, apart from running effects, ${\cal B}_q$ is essentially $-B_{2,q}-f_q$.
More explicitly, we get \begin{eqnarray} {\cal B}^{(1)}_q&=& -3C_F, \nonumber \\ {\cal B}^{(2)}_q&=& C_F^2\left[-\frac{3}{2}+12\zeta_2-24\zeta_3\right]+C_FC_A\left[-\frac{3155}{54}+\frac{44}{3}\zeta_2+40\zeta_3\right]\nonumber \\ &&+C_FN_F\left[\frac{247}{27}-\frac{8}{3}\zeta_2\right],\nonumber\\ {\cal B}^{(3)}_q&=& C_F^3\left[-\frac{29}{2}-18\zeta_2-68\zeta_3-\frac{288}{5}\zeta_2^2+32\zeta_2\zeta_3+240\zeta_5\right]\nonumber\\ &&+C_AC_F^2\left[-46+287\zeta_2-\frac{712}{3}\zeta_3-\frac{272}{5}\zeta_2^2-16\zeta_2\zeta_3-120\zeta_5\right]\nonumber\\ &&+C_A^2C_F\left[-\frac{599375}{729}+\frac{32126}{81}\zeta_2+\frac{21032}{27}\zeta_3-\frac{652}{15}\zeta_2^2-\frac{176}{3}\zeta_2\zeta_3-232\zeta_5\right]\nonumber\\ &&+C_F^2N_F\left[\frac{5501}{54}-50\zeta_2+\frac{32}{9}\zeta_3\right]+C_FN_F^2\left[-\frac{8714}{729}+\frac{232}{27}\zeta_2-\frac{32}{27}\zeta_3\right] \nonumber\\ &&+C_AC_FN_F\left[\frac{160906}{729}-\frac{9920}{81}\zeta_2-\frac{776}{9}\zeta_3+\frac{208}{15}\zeta_2^2\right]. \end{eqnarray} These results agree with the ones in Ref.~\cite{Vogt288}. Similar to the case of DY and Higgs, we get after a simple calculation \begin{eqnarray} {\cal F}_{\rm DIS}^{(1)}&=&C_F \left(-9-2\zeta_2\right), \nonumber\\ {\cal F}_{\rm DIS}^{(2)}&=&C_F^2\left[\frac{331}{8}+{111\over 2}\zeta_2-66\zeta_3+\frac{4}{5}\zeta_2^2\right]\nonumber\\ &&+C_FC_A\left[-\frac{5465}{72}-\frac{1139}{18}\zeta_2+\frac{464}{9}\zeta_3+\frac{51}{5}\zeta_2^2\right]\nonumber\\ &&+C_FN_F\left[\frac{457}{36}+\frac{85}{9}\zeta_2+\frac{4}{9}\zeta_3\right]. \end{eqnarray} Again these results agree with $g_{01}^{\rm DIS}$ and $g_{02}^{\rm DIS}$.
\subsection{Drell-Yan Coefficient Function Using DIS Parton Distributions} If one calculates the Drell-Yan coefficient function in terms of the DIS parton distributions, one has \begin{eqnarray} \Delta_N &=& G_{N, q}/G_{N, \rm DIS}^2 \nonumber \\ &\sim & \int_0^1dz\frac{z^{N-1}-1}{1-z}\left[2\int_{(1-z)Q^2}^{(1-z)^2Q^2}\frac{d\mu^2}{\mu^2} A_q(\alpha_s(\mu^2)) \right.\nonumber \\&& \left.+D_q(\alpha_s((1-z)^2Q^2))-2{\cal B}_q(\alpha_s((1-z)Q^2))\right]. \end{eqnarray} We have seen from the last two subsections that if one ignores the running effects, $D_q\sim 2B_1+4B_2$ and ${\cal B}_q \sim B_1+B_2$. Hence the last two terms in the above equation are just $\sim 2B_2$ in the EFT, the negative of the coefficient in front of $\delta(1-x)$ in the DGLAP splitting function. \subsection{Performing the Integrals} Another way to compare the EFT results with the traditional ones is to carry out the integral $I_1+I_2+I_3$ directly, and compare the final form of the resummed result. We also wish to show that the way we arrive at the final result is much simpler than the existing one in the literature. Specializing to the DY and Higgs cases, the integral is then, \begin{eqnarray} \label {simple I} I_1+I_2+I_3&=&\int_{Q^2/{\overline N}^2}^{Q^2}\frac{d\mu^2}{\mu^2}\left[A_{(q,g)}(\alpha_s(\mu^2))\ln\frac{\mu^2{\overline N}^2}{Q^2} -({\bigtriangleup B}_{(q,g)}-f_{(q,g)})\right]. \end{eqnarray} We also need the solution of the renormalization group equation for $\alpha_s(\mu^2)$. Adopting the notation of Ref.~\cite{Catani:2003zt} we have \begin{eqnarray} \alpha_s(\mu^2)&=&\frac{\alpha_s(Q^2)}{l}\left\{1-\frac{\alpha_s(Q^2)}{l}\frac{b_1}{b_0}\ln l\right.\nonumber \\ &+&\left.\left(\frac{\alpha_s(Q^2)}{l}\right)^2\left[\frac{b_1^2}{b_0^2}(\ln^2 l-\ln l +l-1)-\frac{b_2}{b_0}(l-1)\right] +{\cal O}\left(\alpha_s^3(Q^2)\right)\right\}, \end{eqnarray} where $l=1+b_0\alpha_s(Q^2)\ln (\mu^2/Q^2)$ and $b_i=\frac{1}{(4\pi)^{i+1}}\beta_i$. Let us start with the contribution of the $A^{(1)}_{(q,g)}$ term.
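As a sanity check of the truncated running-coupling solution (written with the three-loop piece in the standard iterative form, $\propto (b_2/b_0)(l-1)$), one can compare it against a direct numerical solution of the renormalization-group equation; the parameter values below are hypothetical:

```python
import math

# Hypothetical numerical values; only the structure of the solution is tested.
b0, b1, b2 = 0.70, 0.40, 0.20
aQ = 0.10                      # alpha_s(Q^2)
t  = -5.0                      # ln(mu^2 / Q^2)

# Truncated iterative solution quoted in the text
l = 1.0 + b0 * aQ * t
a_formula = (aQ / l) * (1.0
    - (aQ / l) * (b1 / b0) * math.log(l)
    + (aQ / l)**2 * ((b1**2 / b0**2) * (math.log(l)**2 - math.log(l) + l - 1.0)
                     - (b2 / b0) * (l - 1.0)))

# Reference: solve d a / d t = -a^2 (b0 + b1 a + b2 a^2) numerically with RK4
def rhs(a):
    return -a * a * (b0 + b1 * a + b2 * a * a)

a, steps = aQ, 20000
h = t / steps
for _ in range(steps):
    k1 = rhs(a)
    k2 = rhs(a + 0.5 * h * k1)
    k3 = rhs(a + 0.5 * h * k2)
    k4 = rhs(a + h * k3)
    a += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(a_formula, a, abs(a_formula - a))   # agree up to dropped O(a^4) terms
```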
Changing the integration variable from $\mu^2$ to $l$, this contribution gives \begin{eqnarray} \label{A_1} I_{A_1}&=&\frac{A^{(1)}_{(q,g)}}{4\pi b_0}\int_{1-2\lambda}^{1}\frac{dl}{l}\left\{1-\alpha_s(Q^2)\frac{b_1}{b_0}\frac{\ln l}{l} +\left(\frac{\alpha_s(Q^2)}{l}\right)^2\left[\frac{b_1^2}{b_0^2}\left[\ln^2 l-\ln l+l-1\right]\right.\right.\nonumber\\ &&\left.\left.-\frac{b_2}{b_0}\left(l-1\right)\right]\right\}\left(2\ln {\overline N}+\frac{l-1}{b_0\alpha_s(Q^2)}\right), \end{eqnarray} where $\lambda\equiv b_0\alpha_s(Q^2)\ln {\overline N}$. The last equation exhibits a pattern that repeats itself when other contributions are included. Taking as a working rule that $\ln {\overline N}\sim 1/\alpha_s(Q^2)$, the last two terms give rise to comparable contributions; however, inside the curly brackets we have an expansion in $\alpha_s(Q^2)$. Thus the hierarchy is manifest. Carrying out the integrals in Eq.~(\ref{A_1}) is very simple (the $b_2$ piece, which first enters at NNLL, is included in $g^{(3)}$ below) and we get \begin{eqnarray} I_{A_1}&=&\ln {\overline N}\left\{\frac{A^{(1)}_{(q,g)}}{4\pi b_0}\left[\frac{2\lambda+(1-2\lambda)\ln (1-2\lambda)}{\lambda}\right]\right\}\nonumber \\ &+& \frac{A^{(1)}_{(q,g)}b_1}{4\pi b_0^3}\left[2\lambda+\ln (1-2\lambda)+{1\over 2}\ln^2(1-2\lambda)\right]\nonumber\\ &&+\alpha_s(Q^2)\frac{A^{(1)}_{(q,g)}b_1^2}{4\pi b_0^4}\left[2\lambda^2+2\lambda\ln (1-2\lambda)+{1\over 2}\ln^2(1-2\lambda)\right]\frac{1}{1-2\lambda}. \end{eqnarray} Expanding the $\lambda$-terms in the last equation, we get a sum of the form $\alpha_s^n(Q^2)\ln^{n+1} {\overline N}$ from the first term, $\alpha_s^n(Q^2)\ln^n {\overline N}$ from the second term, and $\alpha_s^{n+1}\ln^n {\overline N}$ from the last term. These are commonly called leading logarithms (LL), next-to-leading logarithms (NLL) and next-to-next-to-leading logarithms (NNLL), respectively. Higher logarithmic accuracies follow easily. Consider now the contribution from $A^{(2)}_{(q,g)}$.
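As an aside, the closed form for $I_{A_1}$ just derived can be verified by direct numerical integration of Eq.~(\ref{A_1}); in the sketch below the $b_2$ piece is dropped, since its contribution is collected in $g^{(3)}$, and the parameter values are hypothetical:

```python
import math

# Hypothetical parameter values; only the integral identity is being checked.
A1, b0, b1 = 1.0, 0.70, 0.40
aQ = 0.20                          # alpha_s(Q^2)
lnN = 2.0                          # ln(Nbar)
lam = b0 * aQ * lnN                # lambda = 0.28, so 1 - 2*lambda = 0.44 > 0
u = 1.0 - 2.0 * lam

def integrand(l):
    # curly brackets of Eq. (A_1) with the b2 piece dropped
    ln_l = math.log(l)
    brace = (1.0 - aQ * (b1 / b0) * ln_l / l
             + (aQ / l)**2 * (b1**2 / b0**2) * (ln_l**2 - ln_l + l - 1.0))
    return (A1 / (4 * math.pi * b0)) * brace / l * (2 * lnN + (l - 1.0) / (b0 * aQ))

# Simpson's rule on [1 - 2*lambda, 1]
n = 20000                          # even number of intervals
h = (1.0 - u) / n
s = integrand(u) + integrand(1.0)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(u + i * h)
I_num = s * h / 3.0

# Closed form quoted in the text (LL, NLL and NNLL terms)
lu = math.log(u)
I_closed = (lnN * (A1 / (4 * math.pi * b0)) * (2 * lam + u * lu) / lam
            + (A1 * b1 / (4 * math.pi * b0**3)) * (2 * lam + lu + 0.5 * lu**2)
            + aQ * (A1 * b1**2 / (4 * math.pi * b0**4))
              * (2 * lam**2 + 2 * lam * lu + 0.5 * lu**2) / u)

print(I_num, I_closed, abs(I_num - I_closed))
```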
Similar to the $A^{(1)}$ contribution we get \begin{eqnarray} I_{A_2}&=&\frac{A^{(2)}_{(q,g)}}{(4\pi)^2b_0}\int_{1-2\lambda}^{1}\frac{dl}{l^2}\alpha_s(Q^2)\left[1-2\alpha_s(Q^2)\frac {b_1}{b_0}\frac{\ln l}{l}+{\cal O}(\alpha_s^2(Q^2))\right]\nonumber \\ && ~~\times \left(2\ln {\overline N}+\frac{l-1}{b_0\alpha_s(Q^2)}\right), \end{eqnarray} so we see that $A^{(2)}_{(q,g)}$ does not contribute to the LL but starts at NLL. This contribution is \begin{eqnarray} I_{A_2} &=&-\frac{A^{(2)}_{(q,g)}}{(4\pi)^2b_0^2}\left[2\lambda+\ln (1-2\lambda)\right]-\alpha_s(Q^2)\frac {A^{(2)}_{(q,g)}b_1}{(4\pi)^2b_0^3}\left[2\lambda + 2\lambda^2+\ln (1-2\lambda)\right]\frac{1}{1-2\lambda}. \end{eqnarray} From the $A^{(3)}_{(q,g)}$ term we get \begin{eqnarray} I_{A_3} &=&\frac{A^{(3)}_{(q,g)}}{(4\pi)^3b_0}\int_{1-2\lambda}^{1}\frac{dl}{l^3}\alpha_s^2(Q^2)\left[1+{\cal O}(\alpha_s(Q^2))\right]\left(2\ln {\overline N}+\frac{l-1}{b_0\alpha_s(Q^2)}\right), \end{eqnarray} which is an NNLL contribution; \begin{eqnarray} I_{A_3}&=&\alpha_s(Q^2)\frac{A^{(3)}_{(q,g)}}{(4\pi)^3b_0^2}~\frac{2\lambda^2}{1-2\lambda}. \end{eqnarray} The contribution from the term $\triangle B^{(i)}-f^{(i)}$ starts at NNLL accuracy since this term vanishes for $i=0,1$. From Eq.~(\ref {d00}) we have $\triangle B^{(2)}_{(q,g)}-f^{(2)}_{(q,g)}=(1/2)(D^{(2)}-4\beta_0\zeta_2A^{(1)}_{(q,g)})$. The contribution of this term gives \begin{eqnarray} I_{B_2}&=&-\frac{1}{(4\pi)^2}\frac{1}{b_0\alpha_s(Q^2)}[\triangle B^{(2)}_{(q,g)}-f^{(2)}_{(q,g)}]\int_{1-2\lambda}^{1}\frac{dl}{l^2}\alpha_s^2(Q^2), \end{eqnarray} which is an NNLL contribution; \begin{eqnarray} I_{B_2}&=&\alpha_s(Q^2)\frac{1}{(4\pi)^2b_0}\left[4\beta_0\zeta_2A^{(1)}_{(q,g)}-D^{(2)}_{(q,g)}\right]~\frac{\lambda} {1-2\lambda}.
\end{eqnarray} Writing the sum of all contributions already obtained in the form of \begin{eqnarray} I_{A_1} + I_{A_2} + I_{A_3} + I_{B_2} = \ln {\overline N}g^{(1)}_{(q,g)}+g^{(2)}_{(q,g)}+\alpha_s(Q^2)g^{(3)}_{(q,g)}, \end{eqnarray} we get \begin{eqnarray} g^{(1)}_{(q,g)}(\lambda)&=&\frac{A^{(1)}_{(q,g)}}{4\pi b_0}\left[\frac{2\lambda+(1-2\lambda)\ln (1-2\lambda)}{\lambda}\right],\nonumber\\ g^{(2)}_{(q,g)}(\lambda)&=&-\frac {A^{(2)}_{(q,g)}}{(4\pi)^2b_0^2}\left[2\lambda+\ln(1-2\lambda)\right]+\frac{A^{(1)}_{(q,g)}b_1}{4\pi b_0^3}\left[2\lambda+\ln(1-2\lambda)+\frac{1}{2}\ln^2(1-2\lambda)\right],\nonumber \\ g^{(3)}_{(q,g)}(\lambda)&=&\left[\frac{4\zeta_2A^{(1)}_{(q,g)}}{4\pi}-\frac{D^{(2)}_{(q,g)}}{(4\pi)^2b_0}\right]~\frac {\lambda}{1-2\lambda} +\frac{A^{(1)}_{(q,g)}b_1^2}{4\pi b_0^4}\left[2\lambda^2+2\lambda\ln (1-2\lambda)+{1\over 2}\ln^2 (1-2\lambda)\right]\frac{1}{1-2\lambda}\nonumber\\ &&+\frac{A^{(1)}_{(q,g)}b_2}{4\pi b_0^3}\left[2\lambda+\ln (1-2\lambda)+\frac{2\lambda^2}{1-2\lambda}\right]+\frac{2A^{(3)}_{(q,g)}}{(4\pi)^3b_0^2}~\frac{\lambda^2}{1-2\lambda}\nonumber\\ &&-\frac{A^{(2)}_{(q,g)}b_1}{(4\pi)^2b_0^3}\left[2\lambda+2\lambda^2+\ln (1-2\lambda)\right]~\frac{1}{1-2\lambda}. \end{eqnarray} The above functions sum the large logarithms at LL, NLL and ${\rm N}^2$LL accuracy, respectively. It is straightforward to also obtain $\alpha_s^2g^{(4)}$, which resums the ${\rm N}^3$LL terms. It will contain contributions from $A^{(i)}_{(q,g)}$ up to $i=4$ and from $D^{(2)}_{(q,g)}$ and $D^{(3)}_{(q,g)}$. The yet uncalculated quantity $A^{(4)}_{(q,g)}$ is the only missing piece to complete the ${\rm N}^3$LL resummation program. The above results for the $g^{(i)}$ agree with those in \cite{Catani:2003zt,Vogt146}. We remind the reader that we have set the factorization scale and the renormalization scale equal to $Q^2$, and that the $\gamma_E$ dependence is hidden in the $\overline N$ used throughout. The analysis for the DIS case can be performed similarly and one also finds agreement with the known results.
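Each NNLL piece of $g^{(3)}$ can be checked in the same way. For instance, the $A^{(2)}_{(q,g)}b_1$ term follows from integrating the $b_1$ part of $\alpha_s^2(\mu^2)$ against the logarithm in Eq.~(\ref{simple I}); a numerical sketch with hypothetical parameter values:

```python
import math

# Hypothetical values; only the NNLL integral identity is tested.
A2, b0, b1 = 1.0, 0.70, 0.40
aQ = 0.20                          # alpha_s(Q^2)
lnN = 2.0                          # ln(Nbar)
lam = b0 * aQ * lnN
u = 1.0 - 2.0 * lam

# b1 part of alpha_s^2(mu^2) ~ -2 aQ^3 (b1/b0) ln(l) / l^3, inserted into
# (A2/(4 pi)^2) Int dmu^2/mu^2 alpha_s^2 [2 ln(Nbar) + (l-1)/(b0 aQ)],
# with dmu^2/mu^2 = dl/(b0 aQ)
def integrand(l):
    return ((A2 / (4 * math.pi)**2) / (b0 * aQ)
            * (-2.0 * aQ**3 * (b1 / b0) * math.log(l) / l**3)
            * (2 * lnN + (l - 1.0) / (b0 * aQ)))

# Simpson's rule on [1 - 2*lambda, 1]
n = 20000
h = (1.0 - u) / n
s = integrand(u) + integrand(1.0)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(u + i * h)
I_num = s * h / 3.0

# Corresponding closed-form term in g^(3), multiplied by alpha_s(Q^2)
lu = math.log(u)
I_closed = -aQ * (A2 * b1 / ((4 * math.pi)**2 * b0**3)) \
           * (2 * lam + 2 * lam**2 + lu) / u

print(I_num, I_closed, abs(I_num - I_closed))
```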
\section{Conclusion} Threshold resummation of logarithmic enhancements due to soft gluon radiation has been performed using the methodology of effective field theory. This method works to any desired (subleading) logarithmic accuracy and is completely equivalent to the more conventional, factorization-based techniques. This has been illustrated for all three inclusive processes we considered: DIS, DY and SM Higgs production. Conceptually and technically, however, this approach is much less complicated, and it is physically more transparent than the other approaches. Working perturbatively in moment space (and for large values of $N$) we found that one does \emph{not} need to introduce any additional nonperturbative quantities (other than the conventional PDFs), as is usually the case in the traditional approaches. All the quantities needed to obtain the resummed coefficient functions are straightforwardly extracted from fixed-order calculations of the form factors (which supply the $C^{(i)}$ and the $\gamma_1^{(i)}$), the Altarelli-Parisi splitting kernels (which supply the $\gamma_2^{(i)}$) and the cross section for real gluon emission in the soft limit (from which we get the ${\cal M}^{(i)}$). It should be mentioned that the treatment of DIS given here is applicable only in the Bjorken limit, where one takes $Q^2$ to infinity first. For finite (but large) values of $Q^2$, where the scale $Q^2(1-x)^2$ would emerge, a different treatment is needed. The method discussed in this paper can be extended straightforwardly to other processes. \section*{ACKNOWLEDGMENTS} We thank J. P. Ma for collaboration at the early stage of this project. A. I. and X. J. were supported by the U. S. Department of Energy via grant DE-FG02-93ER-40762. X. J. is also supported by a grant from the National Natural Science Foundation of China. F. Y. thanks Werner Vogelsang for useful discussions related to the subject of the present paper.
He is grateful to RIKEN, Brookhaven National Laboratory and the U.S. Department of Energy (contract number DE-AC02-98CH10886) for providing the facilities essential for the completion of his work. {\bf Note:} While this paper was being written, a paper by T. Becher and M. Neubert appeared on the archive (hep-ph/0605050) which also uses EFT to resum the large logarithms for DIS. In their paper, the jet function is similar to the matching coefficient ${\cal M}$ here. \vfill\eject \section*{APPENDIX} In this appendix, we collect the coefficient functions for deep-inelastic scattering, Drell-Yan and Higgs production (within the large top-quark mass effective theory) to ${\cal O}(\alpha_s^2)$ in the soft limit of full QCD. They are used to extract the matching coefficients ${\cal M}$ in Eqs. (34-36). As we have remarked in the main paper, these results must be reproduced by calculations of an EFT in which only the soft and collinear degrees of freedom are taken into account. For DIS (see Refs.~\cite{Zil1,Zil2}), Drell-Yan (see Ref.~\cite{Matsuura}) and Higgs production (see Refs.~\cite{Catani01,Har01}), we have \begin{eqnarray} G^{(2),{\rm s+v}}_{\rm DIS}(x) &=& C_F^2\left\{\left[16{\cal D}_1(x)+12{\cal D}_0(x)+\delta(1-x)\left(\frac{9}{2}-8\zeta_2\right)\right]\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\Big[24{\cal D}_2(x)-12{\cal D}_1(x)-(45+32\zeta_2){\cal D}_0(x)\nonumber\\ &&\left.+\delta(1-x)\left(-\frac{51}{2}-12\zeta_2+40\zeta_3\right)\right]\ln \left( \frac{Q^2}{\mu^2}\right)\nonumber\\ &&\left.+8{\cal D}_3(x)-18{\cal D}_2(x)-(27+32\zeta_2){\cal D}_1(x)-72\zeta_3{\cal D}_0(x)\right.\nonumber\\ &&\left.+\left(\frac{51}{2}+36\zeta_2+64\zeta_3\right){\cal D}_0(x)+\delta(1-x)\left(\frac{331}{8}+69\zeta_2-78\zeta_3+6\zeta_2^2\right)\right\}\nonumber \\ &&+C_FN_F\left\{\left[{4\over 3}{\cal D}_0(x)+\delta(1-x)\right]\ln^2\left( \frac{Q^2}{\mu^2}\right)+\left[{8\over 3}{\cal D}_1(x)-{58\over 9}{\cal
D}_0(x)\right.\right.\nonumber \\ &&\left.-\delta(1-x)\left({19\over 3}+{16\over 3}\zeta_2\right)\right]\ln \left(\frac{Q^2}{\mu^2}\right) +{4\over 3}{\cal D}_2(x)-{58\over 9}{\cal D}_1(x)+\left({247\over 27}-{8\over 3}\zeta_2\right){\cal D}_0(x)\nonumber\\ &&\left.+\delta(1-x)\left({457\over 36}+{38\over 3}\zeta_2+{4\over 3}\zeta_3\right)\right\}\nonumber\\ &&+C_AC_F\left\{\left[-{22\over 3}{\cal D}_0(x)-{11\over 2}\delta(1-x)\right]\ln^2 \left( \frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[-{44\over 3}{\cal D}_1(x)+\left({367\over 9}-8\zeta_2\right){\cal D}_0(x)+\left({215\over 6}+{88\over 3}\zeta_2-12\zeta_3\right)\delta(1-x)\right]\ln \left( \frac{Q^2}{\mu^2}\right)\nonumber\\ &&-{22\over 3}{\cal D}_2(x)+\left({367\over 9}-8\zeta_2\right){\cal D}_1(x)+36\zeta_3{\cal D}_0(x) \nonumber \\ && +\left(-{3155\over 54}+{44\over 3}\zeta_2+4\zeta_3\right){\cal D}_0(x)\nonumber\\ &&+\left.\delta(1-x)\left(-{5465\over 72}-{251\over 3}\zeta_2+{140\over 3}\zeta_3+{71\over 5}\zeta_2^2\right)\right\}. 
\end{eqnarray} \begin{eqnarray} G^{(2),{\rm s+v}}_q(z) &=&C_F^2\left\{\left[64{\cal D}_1(z)+48{\cal D}_0(z)+\delta(1-z)(18-32\zeta_2)\right]\ln^2 \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[192{\cal D}_2(z)+96{\cal D}_1(z)-(128+64\zeta_2){\cal D}_0(z)\right.\nonumber\\ &&\left.+\delta(1-z)(-93+24\zeta_2+176\zeta_3)\right]\ln \left(\frac{Q^2}{\mu^2}\right)\nonumber\\ &&+128{\cal D}_3(z)-(256+128\zeta_2){\cal D}_1(z)+256\zeta_3{\cal D}_0(z)\nonumber\\ &&\left.+\delta(1-z)\left(\frac{511}{4}-70\zeta_2-60\zeta_3+{8\over 5}\zeta_2^2\right)\right\}\nonumber\\ &&+C_FN_F\left\{\left[{8\over 3}{\cal D}_0(z)+2\delta(1-z)\right]\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&\left.+\left[{32\over 3}{\cal D}_1(z)-{80\over 9}{\cal D}_0(z)-{34\over 3}\delta(1-z)\right]\ln \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&\left.+{32\over 3}{\cal D}_2(z)-{160\over 9}{\cal D}_1(z)+\left({224\over 27}-{32\over 3}\zeta_2\right){\cal D}_0(z)+\delta(1-z)\left({127\over 6}-{112\over 9}\zeta_2+8\zeta_3\right)\right\}\nonumber\\ &&+C_AC_F\left\{\left(-{44\over 3}{\cal D}_0(z)-11\delta(1-z)\right)\ln^2 \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[-{176\over 3}{\cal D}_1(z)+\left({536\over 9}-16\zeta_2\right){\cal D}_0(z)+\left({193\over 3}-24\zeta_3\right)\delta(1-z)\right]\ln \left(\frac{Q^2}{\mu^2}\right)\nonumber\\ &&-{176\over 3}{\cal D}_2(z)+\left({1072\over 9}-32\zeta_2\right){\cal D}_1(z)+\left(-{1616\over 27}+{176\over 3}\zeta_2+56\zeta_3\right){\cal D}_0(z)\nonumber\\ &&\left.+\delta(1-z)\left(-{1535\over 12}+{592\over 9}\zeta_2+28\zeta_3-{12\over 5}\zeta_2^2\right)\right\}. 
\\ G^{(2),{\rm s+v}}_g(z) &=&C_A^2\left\{\left[64{\cal D}_1(z)-{44\over 3}{\cal D}_0(z)-32\zeta_2\delta(1-z)\right]\ln^2 \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[192{\cal D}_2(z)-{176\over 3}{\cal D}_1(z)+\left({536\over 9}-80\zeta_2\right){\cal D}_0(z)\right.\nonumber\\ &&\left.+\delta(1-z)\left(-24-{88\over 3}\zeta_2+152\zeta_3\right)\right]\ln \left(\frac{Q^2}{\mu^2}\right)\nonumber\\ &&+128{\cal D}_3(z)-{176\over 3}{\cal D}_2(z)+\left({1072\over 9}-160\zeta_2\right){\cal D}_1(z)\nonumber\\ &&+\left(-{1616\over 27}+{176\over 3}\zeta_2+312\zeta_3\right){\cal D}_0(z)\nonumber\\ &&\left.+\delta(1-z)\left(93+{536\over 9}\zeta_2-{220\over 3}\zeta_3-{4\over 5}\zeta_2^2\right)\right\}\nonumber\\ &&+C_FN_F\delta(1-z)\left(4\ln\left(\frac{Q^2}{\mu^2}\right)-{67\over 3}+16\zeta_3\right)\nonumber\\ &&+C_AN_F\left\{\left({8\over 3}{\cal D}_0(z)\right)\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[{32\over 3}{\cal D}_1(z)-{80\over 9}{\cal D}_0(z)+\delta(1-z)\left(8+{16\over 3}\zeta_2\right)\right]\ln \left(\frac{Q^2}{\mu^2}\right) \end{eqnarray} \begin{eqnarray} &&+{32\over 3}{\cal D}_2(z)-{160\over 9}{\cal D}_1(z)+\left({224\over 27}-{32\over 3}\zeta_2\right){\cal D}_0(z)\nonumber\\ &&\left.+\delta(1-z)\left(-{80\over 3}-{80\over 9}\zeta_2-{8\over 3}\zeta_3\right)\right\}. 
\end{eqnarray} The Mellin transform of the above functions with respect to their arguments in the large ${\overline N}$ limit are, \begin{eqnarray} G^{(2),{\rm s+v}}_{N,\rm DIS} &=&C_F^2\left\{\left[8\ln^2{\overline N}-12\m+{9\over 2}\right]\ln^2 \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[-8\mmm-6\ln^2{\overline N}+(45+8\zeta_2)\m-{51\over 2}-18\zeta_2+24\zeta_3\right]\ln \left(\frac{Q^2}{\mu^2}\right) \nonumber\\ &&+2\mmmm+6\mmm-\left({27\over 2}+4\zeta_2\right)\ln^2{\overline N}\nonumber\\ &&\left.+\left(-{51\over 2}-18\zeta_2+24\zeta_3\right)\m+{331\over 8}+{111\over 2}\zeta_2-66\zeta_3+{4\over 5}\zeta_2^2\right\}\nonumber\\ &&+C_FN_F\left\{\left[-{4\over 3}\m+1\right]\ln^2\left(\frac{Q^2}{\mu^2}\right)+\left({4\over 3}\ln^2{\overline N}+{58\over 9}\m-{19\over 3}-4\zeta_2\right)\ln \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&-{4\over 9}\mmm-{29\over 9}\ln^2{\overline N}+\left(-{247\over 27}+{4\over 3}\zeta_2\right)\m \nonumber\\ &&\left.+{457\over 36}+{85\over 9}\zeta_2+{4\over 9}\zeta_3\right\}+C_AC_F\left\{\left[{22\over 3}\m-{11\over 2}\right]\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left(-{22\over 3}\ln^2{\overline N}-\left({367\over 9}-8\zeta_2\right)\m+{215\over 6}+22\zeta_2-12\zeta_3\right)\ln\left(\frac{Q^2}{\mu^2}\right)\nonumber\\ &&+{22\over 9}\mmm+\left({367\over 18}-4\zeta_2\right)\ln^2{\overline N}+\left({3155\over 54}-{22\over 3}\zeta_2-40\zeta_3\right)\m\nonumber\\ &&\left.-{5465\over 72}-{1139\over 18}\zeta_2+{464\over 9}\zeta_3+{51\over 5}\zeta_2^2\right\}.\\ G^{(2),{\rm s+v}}_{N,q} &=&C_F^2\left\{\left[32\ln^2{\overline N}-48\m+18\right]\ln^2 \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[-64\mmm+48\ln^2{\overline N}+(128-128\zeta_2)\m-93+72\zeta_2+48\zeta_3\right]\ln \left(\frac{Q^2}{\mu^2}\right)\nonumber\\ &&\left.+32\mmmm-(128-128\zeta_2)\ln^2{\overline N}+{511\over 4}-198\zeta_2-60\zeta_3+{552\over 5}\zeta_2^2\right\}\nonumber\\ &&+C_FN_F\left\{\left[-{8\over 
3}\m+2\right]\ln^2\left(\frac{Q^2}{\mu^2}\right)+\left[{16\over 3}\ln^2{\overline N}+{80\over 9}\m-{34\over 3}+{16\over 3}\zeta_2\right]\ln \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&\left.-{32\over 9}\mmm-{80\over 9}\ln^2{\overline N}-{224\over 27}\m+{127\over 6}-{192\over 9}\zeta_2+{8\over 9}\zeta_3\right\} \nonumber\\ &&+C_FC_A\left\{\left[{44\over 3}\m-11\right]\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber \end{eqnarray} \begin{eqnarray} &&+\left[-{88\over 3}\ln^2{\overline N}-\left({536\over 9}-16\zeta_2\right)\m+{193\over 3}-{88\over 3}\zeta_2-24\zeta_3\right]\ln\left(\frac{Q^2}{\mu^2}\right)\nonumber\\ &&+{176\over 9}\mmm+\left({536\over 9}-16\zeta_2\right)\ln^2{\overline N}+\left({1616\over 27}-56\zeta_3\right)\m\nonumber\\ &&\left.-{1535\over 12}+{1128\over 9}\zeta_2+{604\over 9}\zeta_3-{92\over 5}\zeta_2^2\right\}. \end{eqnarray} \begin{eqnarray} G^{(2),{\rm s+v}}_{N,g}&=&C_A^2\left\{\left[32\ln^2{\overline N}+{44\over 3}\m\right]\ln^2\left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&+\left[-64\mmm-{176\over 6}\ln^2{\overline N}-\left({536\over 9}+112\zeta_2\right)\m-24-{176\over 3}\zeta_2+24\zeta_3\right]\ln \left(\frac{Q^2}{\mu^2}\right)\nonumber\\ &&+32\mmmm+{176\over 9}\mmm+\left({536\over 9}+112\zeta_2\right)\ln^2{\overline N}\nonumber\\ &&+\left({1616\over 27}-56\zeta_3\right)\m+93+{1072\over 9}\zeta_2-{308\over 9}\zeta_3+92\zeta_2^2\nonumber\\ &&+C_AN_F\left\{\left[-{8\over 3}\m\right]\ln^2 \left(\frac{Q^2}{\mu^2}\right)+\left[{16\over 3}\ln^2{\overline N}+{80\over 9}\m+8+{32\over 3}\zeta_2\right]\ln \left(\frac{Q^2}{\mu^2}\right)\right.\nonumber\\ &&\left.-{32\over 9}\mmm-{80\over 9}\ln^2{\overline N}-{224\over 27}\m-{80\over 3}-{160\over 9}\zeta_2-{88\over 9}\zeta_3\right\}\nonumber\\ &&+C_FN_F\left\{4\ln \left(\frac{Q^2}{\mu^2}\right)-{67\over 3}+16\zeta_3\right\}. \end{eqnarray}
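The passage from the $x$-space distributions of the appendix to the large-$\overline N$ moments above rests on the rule ${\cal D}_0(x)\to -\ln{\overline N}$ with $\overline N = Ne^{\gamma_E}$, which follows from the exact identity $\int_0^1 dx\, x^{N-1}\left[1/(1-x)\right]_+ = -H_{N-1}$. A quick numerical check of this rule (a sketch; the function name is ours):

```python
import math

EULER_GAMMA = 0.5772156649015329

def mellin_D0(N):
    """Mellin moment of the plus-distribution [1/(1-x)]_+ :
    int_0^1 dx (x^(N-1) - 1)/(1 - x) = -H_{N-1}, an exact identity."""
    return -sum(1.0 / k for k in range(1, N))

# Large-N behaviour used in the text: D_0 -> -ln(Nbar), Nbar = N * exp(gamma_E)
N = 1000
large_N = -math.log(N) - EULER_GAMMA
assert abs(mellin_D0(N) - large_N) < 1e-3
```

The higher distributions ${\cal D}_k$ map onto $\ln^{k+1}{\overline N}$ (plus $\zeta$-valued constants) in the same large-$N$ limit.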
\section{Introduction} \citet{R97} discovered that the horizontal branch morphologies of NGC~6388 and NGC~6441 were different from those of other metal-rich globular clusters. Although these two clusters have metallicities near ${\rm [Fe/H]} = -0.6$ \citep{A88, Cl05}, they have blue extensions to the horizontal branch in addition to the red horizontal branch components usually seen in metal-rich globular clusters. As a consequence, and unlike other metal-rich globular clusters, NGC~6388 and NGC~6441 have substantial populations of RR Lyrae stars. These RR Lyrae stars are themselves distinguished by having extraordinarily long periods for their metallicities, so that they do not fit the usual pattern of decreasing mean period with increasing [Fe/H] \citep{L99,P00,P01,P02,P03}. The reasons for the unusual horizontal branch morphologies of these clusters and the unusual characteristics of their RR Lyrae stars have not been established, although several scenarios have been advanced \citep{P97,S98,S02, R02, Ca05}. In addition to harboring this anomalous population of RR Lyrae stars, NGC~6388 and NGC~6441 each contain several type II Cepheids, making them the most metal-rich globular clusters known to contain such stars \citep{P02,P03}. \begin{figure*}[t] \figurenum{1} \epsscale{0.85} \plotone{f1a.eps} \caption{Differential $B$ flux light curves for the 24 NGC~6441 variable stars not found in \citet{P01}, but found in the HST data of \citet{P03}. The data range from HJD 2450959 to 2450968. The data are plotted in chronological order as filled squares (nights 1 and 3), open squares (night 4), filled triangles (night 7), open triangles (night 8), filled circles (night 9), and open circles (night 10). Differential fluxes are in arbitrary linear units. 
} \label{Fig01a} \end{figure*} \begin{figure*}[t] \figurenum{1} \epsscale{0.85} \plotone{f1b.eps} \caption{{\em Continued.} } \label{Fig01b} \end{figure*} \begin{figure*}[t] \figurenum{2} \epsscale{0.85} \plotone{f2.eps} \caption{Differential $B$ flux light curves for the five NGC~6441 variable stars found in neither \citet{P01} nor \citet{P03}. The data range and symbols are as in Figure~1. } \label{fig2} \end{figure*} \citet{P01,P02} used {\sc daophot} \citep{S87} and {\sc allframe} \citep{S94} to analyze CCD images of NGC~6388 and NGC~6441 that were obtained with the 0.9-m telescope at CTIO. Many variable stars were identified in these studies, but the completeness of the variable star searches was low in the most crowded central portions of the clusters. On the other hand, image-subtraction techniques have revealed large numbers of variable stars that had previously gone undetected on the basis of more standard techniques, including {\sc allframe} (\citeauthor{C03} \citeyear{C03}, \citeyear{Co04} and references therein). We have therefore reanalyzed the 0.9-m observations using the ISIS v2.1 image-subtraction package \citep{A00,A98} in order to obtain a more complete inventory of the variable star populations of the clusters. The utility and completeness of the image-subtraction method can itself be evaluated in the case of NGC~6441, since \citet{P03} also studied the variable stars of the inner regions of that cluster using snapshot observations obtained with the WFPC2 camera on the {\em Hubble Space Telescope} (HST). \begin{figure*}[t] \figurenum{3} \epsscale{0.85} \plotone{f3.eps} \caption{Differential $B$ flux light curves for the twelve NGC~6388 variable stars not found in \citet{P02}. The data range and symbols are as in Figure~1. } \label{fig3} \end{figure*} \begin{figure*}[t] \figurenum{4} \epsscale{0.85} \plotone{f4.eps} \caption{Differential $B$ flux light curves for the six suspected NGC~6388 variables. The data range and symbols are as in Figure~1. 
} \label{fig4} \end{figure*} \section{Observations and Reductions} The details of the observations and processing of the images can be found in \citet{P01, P02}. The ISIS analysis measures the difference in flux for pixels in each image of the time series relative to their flux in a reference image obtained by stacking a suitable subset of images. In the ISIS method, the original images are convolved with a kernel to account for seeing variations and geometrical distortions of the individual frames. We used the 10 $B$ images with the best seeing from each of the NGC~6441 and the NGC~6388 datasets to build up the reference images. This process identified a large number of stars with differential flux values above our threshold. Each candidate was evaluated for the likelihood that it was a variable star; most clearly were not. We then compared our list of possible variables with the list of known variables. Suspected variables that were not previously known were examined closely, and many were eliminated. Some appeared to be genuinely variable and are reported here as new variables. 
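The essence of the image-subtraction technique can be illustrated with a toy one-dimensional example. This is a deliberate simplification of what ISIS actually does, and the star positions, fluxes and kernel below are invented for illustration only:

```python
def convolve(signal, kernel):
    """Direct 1-D convolution, 'same' output size, pure Python."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += signal[idx] * w
        out.append(s)
    return out

# Toy 1-D "frames": two blended stars; only the second one varies.
reference = [0.0] * 40
for centre, flux in ((18, 100.0), (22, 80.0)):
    reference[centre] = flux

kernel = [0.05, 0.25, 0.40, 0.25, 0.05]   # matches the target frame's seeing
blurred_ref = convolve(reference, kernel)

# Target frame: same field, but the star at pixel 22 brightened by 20 units.
variable = convolve([100.0 if i == 22 else 0.0 for i in range(40)], kernel)
target = [t + 0.20 * v for t, v in zip(blurred_ref, variable)]

# Difference image: constant stars cancel; the variable stands out.
diff = [t - r for t, r in zip(target, blurred_ref)]
peak_pixel = max(range(len(diff)), key=diff.__getitem__)
assert peak_pixel == 22 and max(diff) > 5.0 and abs(diff[10]) < 1e-9
```

Because constant stars cancel in the difference even when badly blended, the method can recover variables in crowded fields where profile-fitting photometry struggles.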
\begin{deluxetable}{llll} \tablecaption{NGC~6441 HST Variables\label{tbl-1}} \tablewidth{0pt} \tablehead{ \colhead{Variable} & \colhead{ISIS period} & \colhead{HST period} & \colhead{Type}} \startdata V106 &0.361 &0.36092 &RRc\\ V107 &0.746 &0.73891 &RRab\\ V108 &0.344 &0.34419 &RRc\\ V109 &0.365 &0.36455 &RRc\\ V110 &0.769 &0.76867 &RRab\\ V111 &0.743 &0.74464 &RRab\\ V112 &0.614 &0.61419 &RRab\\ V113 &0.586 &0.58845 &RRab\\ V114 &0.675 &0.67389 &RRab\\ V115 &0.860 &0.86311 &RRab\\ V116 &0.582 &0.58229 &RRab\\ V117 &0.745 &0.74529 &RRab\\ V118 &0.979 &0.97923 &RRab or Ceph\\ V119 &0.686 &0.68628 &RRab\\ V120 &0.364 &0.36396 &RRc\\ V121 &0.848 &0.83748 &RRab\\ V122 &0.744 &0.74270 &RRab\\ V123 &0.336 &0.33566 &RRc\\ V124 &0.315 &0.31588 &RRc\\ V125 &0.337 &0.33679 &RRc\\ V140 &0.616 &0.35180 &RR\\ V141 &0.847 &0.84465 &RRab\\ V142 &0.887 &0.88400 &RRab\\ V143 &0.863 &0.86277 &RRab\\ \enddata \end{deluxetable} \begin{deluxetable}{lllll} \tablecaption{NGC~6441 Possible New Variables\label{tbl-2}} \tablewidth{0pt} \tablehead{ \colhead{Variable} & \colhead{ISIS period} & \colhead{RA (2000)} & \colhead{Dec (2000)} & \colhead{Type}} \startdata NV1(V146) &0.402 &17 50 13.15&-37 03 00.4&RRc\\ NV2(V147) &0.355 & 17 50 13.26&-37 02 52.3&RRc\\ NV3(V148) &0.390 &17 50 12.79&-37 02 50.9 &RRc\\ NV4(V149) &0.557 & 17 50 10.06& -37 02 26.5&RR\\ NV5(V150) &0.529 &17 50 07.07 &-37 03 16.1 &RR\\ \enddata \end{deluxetable} \section{Results} In this section we present results for newly discovered variable stars in both NGC~6388 and NGC~6441. In the case of NGC~6441, we also evaluate the ability of ISIS to detect RR Lyrae and Cepheid variables in ground-based data. In particular, we compare the detection of variable stars in the ISIS analysis of images of NGC~6441 taken at CTIO to the detection of variable stars in \citeauthor{P03}~'s (\citeyear{P03}) analysis of WFPC2 observations of the same cluster. 
Periods for the new variables were determined using the period-search program {\sc kiwi}. {\sc kiwi} follows \citet{lk65} in searching for periodicity by seeking to minimize the total length of the line segments that join adjacent observations in phase space, i.e., to maximize the smoothness of the light curve. (The {\sc kiwi} program was kindly provided to us by Dr. Betty Blanco.) In some cases the {\sc kiwi} periods were adjusted to improve the phase match between different nights. The analyzed data cover 10 nights, spanning about 33 cycles for the shorter-period variables, and about 12 cycles for the longer-period ones. Because of this relatively short time interval, ISIS periods are given to only three significant figures. Differential flux light curves for the variables based on the ISIS analysis and the periods given in Tables~1 and 2 (NGC~6441) and 4 (NGC~6388) are shown in Figures~1 and 2 (NGC~6441) and 3 and 4 (NGC~6388). Differential fluxes can be transformed into standard magnitudes if reliable photometry can be obtained for the variable stars in the reference frame \citep[see, for example,][]{Br03,Mo02,Ba05}. Unfortunately, the newly discovered variables in NGC~6388 and NGC~6441 are in very crowded portions of the field, where the star images are badly blended. Conventional methods of photometry such as {\sc allstar} and {\sc allframe} are in these cases unable to provide the accurate magnitude zero-point on which to base a conversion from differential fluxes to magnitudes. Thus, our analysis will be based upon the differential flux light curves produced by ISIS. Table~1 and Figure~1 refer to NGC~6441 variables previously identified with HST, whereas Table~2 and Figure~2 report on the new NGC~6441 variable candidates detected with ISIS. There is only one instance where the {\sc kiwi} period differs significantly from that found in \citet{P03} (V140, see section 4). 
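The string-length idea behind {\sc kiwi} is easy to sketch. The following is a minimal implementation in the spirit of the \citet{lk65} method, not the actual {\sc kiwi} code; the synthetic light curve and trial-period grid are invented:

```python
import math

def string_length(times, fluxes, period):
    """Total length of the line segments joining observations adjacent in
    phase; a smoother folded light curve gives a smaller value."""
    phases = [(t / period) % 1.0 for t in times]
    order = sorted(range(len(times)), key=lambda i: phases[i])
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):
        dphi = abs(phases[b] - phases[a])
        dphi = min(dphi, 1.0 - dphi)          # wrap around phase 1 -> 0
        total += math.hypot(dphi, fluxes[b] - fluxes[a])
    return total

def best_period(times, fluxes, trial_periods):
    return min(trial_periods, key=lambda p: string_length(times, fluxes, p))

# Synthetic RRc-like sinusoid with a 0.40 d period, sampled over ~20 d
true_p = 0.40
times = [0.337 * n for n in range(60)]
fluxes = [math.sin(2 * math.pi * t / true_p) for t in times]
trials = [0.30 + 0.001 * k for k in range(201)]     # 0.300 ... 0.500 d
assert abs(best_period(times, fluxes, trials) - true_p) < 0.002
```

As in the real analysis, a short observing baseline limits how finely neighbouring trial periods can be distinguished, which is why the ISIS periods are quoted to only three significant figures.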
Table 2 contains five possible NGC~6441 variables that were not identified in the HST study by \citet{P03}. The WFPC2 photometry was reexamined to determine whether these five variables could be recovered from the HST dataset. All were recovered and could be identified as probable or possible variables. For NV1 and NV2 the WFPC2 photometry is noisy, but is consistent with the periods identified from the ISIS data. The WFPC2 photometry for NV3 is especially noisy, but suggests that the star may be truly variable. For NV4 and NV5, there are only 10 and 6 HST epochs of observation, respectively, but the data are again consistent with the identification of NV4 and NV5 as variable stars. Thus, NV1 through NV5 would become NGC~6441 variables V146 through V150, respectively. The positions of these variables were determined from the WFPC2 images as in \citet{P03}. Three variables found in the HST analysis of \citet{P03} were not found in the ISIS analysis: V136, V138, and V145. All three of these variables are within 10 arcsec of the center of the cluster. In fact, V145 is the variable found by HST that is closest to the center. It would be expected that stars near the very center of a cluster would be the most difficult for ISIS to detect. It is noteworthy in this connection that NGC 6388 and NGC 6441 are both clusters with strong central concentrations of stars \citep[see][]{H96, Tr95}. An RRc classification for the newly discovered variables NV1, NV2, and NV3 seems clear on the basis of period and light curve shape. However, results for the longer period variables NV4 and NV5 are not so clear. Plots of the Fourier decomposition parameters of light curves have proven useful in distinguishing between RRab and RRc type variables, e.g. \citet{P02}. In Figure 5 we plot the Fourier parameters $A_{21}$ versus $\phi_{21}$, based upon a fifth order fit to the differential $B$ light curves. 
NV4 and NV5, as well as V140, fall between the stars clearly established as belonging to Bailey type ab and the stars clearly established as Bailey type c. We have also calculated the \citet{St87} skewness parameter from the differential light curves. This parameter is defined as Sk = [1/(rise time in phase units)] - 1. Skewness is plotted against $\phi_{21}$ in Figure 6. Here, V140 and NV4 fall among the RRc variables, whereas NV5 lies closer to the RRab stars. The skewness and Fourier parameters for NGC~6441 variables are listed in Table 3. A few outlying points in the light curves were omitted in calculating the parameters. Parameters were not calculated for V118 because of the large gap in its observed light curve. For NGC~6388, for which no HST variable star analysis has been completed, Table~4 lists variable stars not previously detected in the \citet{P02} study. Light curves for the new variables NV1 through NV12 are shown in Figure~3, whereas Figure~4 shows light curves for additional suspected variables of uncertain variability type. 
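The Fourier and skewness statistics can be sketched in a few lines. For simplicity this example assumes a light curve sampled uniformly in phase, so the coefficients follow from direct projection rather than the fifth-order least-squares fit used for the real, unevenly sampled data; all names are ours:

```python
import math, cmath

def fourier_params(phases, mags, order=5):
    """Fourier amplitudes A_k and phases phi_k of a light curve sampled
    uniformly in phase, via direct projection; returns the amplitude ratio
    A_21 = A_2/A_1 and phase difference phi_21 = phi_2 - 2*phi_1 (mod 2*pi)."""
    n = len(phases)
    A, phi = {}, {}
    for k in range(1, order + 1):
        c = sum(m * cmath.exp(-2j * math.pi * k * p)
                for p, m in zip(phases, mags)) / n
        A[k] = 2 * abs(c)
        phi[k] = cmath.phase(c)
    return A[2] / A[1], (phi[2] - 2 * phi[1]) % (2 * math.pi)

def skewness(rise_time):
    """Sk = 1/(rise time in phase units) - 1, as defined in the text."""
    return 1.0 / rise_time - 1.0

# Synthetic curve: m(p) = sin(2*pi*p) + 0.3*sin(4*pi*p + 1.0)
phases = [i / 1000 for i in range(1000)]
mags = [math.sin(2 * math.pi * p) + 0.3 * math.sin(4 * math.pi * p + 1.0)
        for p in phases]
A21, phi21 = fourier_params(phases, mags)
assert abs(A21 - 0.3) < 1e-6
assert abs(phi21 - (1.0 + math.pi / 2)) < 1e-6
assert abs(skewness(0.5) - 1.0) < 1e-12   # symmetric curve: Sk = 1
```

A sawtooth-like RRab curve with a rise time near 0.2 in phase gives Sk near 4, while near-sinusoidal RRc curves give Sk near 1, consistent with the ranges in Tables 3 and 5.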
\begin{deluxetable}{llllllll} \tablecaption{Fourier and Skewness Parameters for NGC~6441 Variables\label{tbl-4}} \tablewidth{0pt} \tablehead{ \colhead{Variable} & \colhead{$A_{21}$} & \colhead{$A_{31}$} & \colhead{$A_{41}$} & \colhead{$\phi_{21}$} & \colhead{$\phi_{31}$} & \colhead{$\phi_{41}$} & \colhead{Sk} } \startdata V106 & 0.13 & 0.05 & 0.08 & 3.21 & 5.34 & 3.07 & 1.49\\ V107 & 0.54 & 0.32 & 0.13 & 3.99 & 1.92 & 6.08 & 3.69\\ V108 & 0.15 & 0.10 & 0.05 & 2.99 & 4.93 & 2.72 & 0.96\\ V109 & 0.09 & 0.06 & 0.10 & 3.43 & 5.83 & 3.59 & 1.62\\ V110 & 0.51 & 0.23 & 0.10 & 4.10 & 2.09 & 6.13 & 3.44\\ V111 & 0.42 & 0.33 & 0.09 & 4.22 & 1.00 & 5.61 & 1.78\\ V112 & 0.56 & 0.38 & 0.17 & 4.00 & 1.73 & 6.03 & 4.29\\ V113 & 0.60 & 0.48 & 0.20 & 3.88 & 1.33 & 5.23 & 4.35\\ V114 & 0.66 & 0.31 & 0.19 & 3.68 & 1.61 & 5.00 & 2.34\\ V115 & 0.41 & 0.10 & 0.06 & 3.68 & 2.48 & 5.17 & 2.28\\ V116 & 0.57 & 0.46 & 0.34 & 3.84 & 1.52 & 5.43 & 4.32\\ V117 & 0.54 & 0.19 & 0.18 & 4.09 & 1.95 & 5.79 & 1.93\\ V119 & 0.59 & 0.36 & 0.23 & 3.95 & 1.73 & 5.46 & 3.81\\ V120 & 0.12 & 0.03 & 0.02 & 3.39 & 0.55 & 4.21 & 1.03\\ V121 & 0.44 & 0.25 & 0.08 & 3.84 & 1.69 & 1.78 & 2.70\\ V122 & 0.50 & 0.29 & 0.15 & 3.94 & 1.92 & 5.83 & 3.74\\ V123 & 0.18 & 0.08 & 0.08 & 3.62 & 5.45 & 2.83 & 1.54\\ V124 & 0.09 & 0.05 & 0.07 & 3.63 & 3.83 & 2.64 & 1.38\\ V125 & 0.18 & 0.07 & 0.07 & 3.11 & 5.61 & 2.90 & 1.07\\ V140 & 0.28 & 0.16 & 0.09 & 3.62 & 1.00 & 5.38 & 1.28\\ V141 & 0.15: & 0.23: & 0.18: & 5.23: & 0.44: & 4.54: & 1.35:\\ V142 & 0.35 & 0.15 & 0.11 & 5.23 & 2.94 & 5.24 & 1.98\\ V143 & 0.70: & 0.01: & 0.10: & 3.78: & 4.00: & 5.78: & 2.38\\ V146(NV1) & 0.16 & 0.16 & 0.12 & 1.66 & 6.17 & 0.37 & 0.90\\ V147(NV2) & 0.13 & 0.08 & 0.07 & 3.82 & 6.00 & 2.74 & 0.92\\ V148(NV3) & 0.11 & 0.05 & 0.04 & 3.73 & 0.29 & 0.49 & 1.04\\ V149(NV4) & 0.27 & 0.12 & 0.03 & 3.31 & 0.47 & 4.32 & 1.08\\ V150(NV5) & 0.19 & 0.03 & 0.12 & 4.71 & 5.85 & 5.01 & 1.36\\ \enddata \end{deluxetable} Eighteen new or suspected variables were 
detected in NGC~6388. The stars labeled NV1 through NV12 seem securely established as variables. They would therefore become NGC~6388 variables V58 through V69, respectively. The nature of the suspected variables, SV1 through SV6, is less certain. Note that SV1 through SV4 are not the same as the suspected variables S1 - S4 listed in \citet{Si94}. Four of these stars are listed as being possible type II Cepheids, based primarily upon the approximate periods indicated by the data. Additional data are needed to confirm this classification. The Delta RA and Delta dec columns of Table 4 give the positions of the new variables in arc seconds relative to the cluster center, and are on the system of \citet{P02}. The Fourier and skewness parameters for the newly discovered RR Lyrae in NGC~6388 are listed in Table~5. Plots of $A_{21}$ versus $\phi_{21}$ and skewness versus $\phi_{21}$ are shown in Figures 7 and 8. In Figure 7 points for known RRab and RRc stars in NGC~6388 are plotted, using the data from Table~6 of \citet{P02}. Although those results were based upon standard $V$ band light curves, they show the same pattern as do the results from our differential flux $B$ light curves. Most of the newly discovered RR Lyrae fall clearly into the RRab or RRc class as indicated in Table~4, but a few cases are still ambiguous. These results clearly illustrate the power of the image subtraction method to detect variable stars in crowded fields. In these particular cases, the ability of image subtraction analysis applied to ground-based images to identify variable stars is comparable to that based on much better resolved HST images, at least as regards variables of relatively large amplitude. Only in the most crowded areas of the cluster, within 10 arcsec of the cluster center, does the ISIS analysis seem to do significantly worse than \citet{P03} in detecting variable stars. 
In any case, it must be noted that the seeing in our ground-based images was quite mediocre, ranging from $1.1\arcsec$ to $2.5\arcsec$ (with a typical seeing of $1.4\arcsec$). It should be remembered, though, that the ISIS program provides light curves in the form of differential fluxes only, and it is not possible for differential-image techniques alone to recover all the information that can be obtained from direct photometric techniques. In particular, no mathematically rigorous algorithm exists for transforming the ISIS differential fluxes to magnitudes on a fundamental scale without additional photometric information obtained by some other method. \begin{deluxetable}{llrrl} \tablecaption{NGC~6388 Possible New Variables \label{tbl-4}} \tablewidth{0pt} \tablehead{ \colhead{Variable} & \colhead{ISIS period} & \colhead{Delta RA} & \colhead{Delta dec} & \colhead{Type}} \startdata NV1(V58) &0.683 &-27.40 &-7.16 &RRab\\ NV2(V59) &0.589 &6.75 &12.28 &RRab\\ NV3(V60) &0.372 &0.00 &-16.29 &RRc\\ NV4(V61) &0.657 &-12.70 &-5.18 &RRab\\ NV5(V62) &0.708 &-8.73 &-0.42 &RRab\\ NV6(V63) &2.045 &8.52 &-2.21 &Ceph\\ NV7(V64) &0.595 &-1.79 &-9.21 &RRab\\ NV8(V65) &0.395 &-4.77 &16.68 &RRc\\ NV9(V66) &0.350 &-10.08 &-9.16 &RRc\\ NV10(V67) &2.27 &-131.87 &-74.75 &Ceph\\ NV11(V68) &0.946 &11.51 &27.76 &RRab?\\ NV12(V69) &3.60 &3.29 &-10.84 &Ceph?\\ SV1 &~8 &4.26 &7.33 &Ceph?\\ SV2 &0.847 &6.18 &2.90 &RRab?\\ SV3 &0.333 &4.81 &-24.92 &RRc?\\ SV4 &~12 &4.25 &-8.69 &Ceph?\\ SV5 &~4.5 &0.94 &-7.18 &Ceph?\\ SV6 &~7 &-7.78 &-24.34 &Ceph?\\ \enddata \end{deluxetable} \begin{deluxetable}{llllllll} \tablecaption{Fourier and Skewness parameters for NGC~6388 Variables\label{tbl-5}} \tablewidth{0pt} \tablehead{ \colhead{Variable} & \colhead{$A_{21}$} & \colhead{$A_{31}$} & \colhead{$A_{41}$} & \colhead{$\phi_{21}$} & \colhead{$\phi_{31}$} & \colhead{$\phi_{41}$} & \colhead{Sk} } \startdata NV1 & 0.55 & 0.31 & 0.16 & 3.96 & 1.71 & 5.69 & 3.69\\ NV2 & 0.51 & 0.35 & 0.19 & 3.77 & 1.47 & 5.47 & 
3.78\\ NV3 & 0.19 & 0.13 & 0.05 & 2.48 & 5.45 & 4.23 & 1.08\\ NV4 & 0.57 & 0.38 & 0.22 & 3.84 & 1.64 & 5.73 & 4.05\\ NV5 & 0.51 & 0.26 & 0.13 & 3.85 & 1.83 & 5.46 & 3.17\\ NV7 & 0.40 & 0.30 & 0.23 & 3.99 & 1.30 & 5.17 & 1.49\\ NV8 & 0.10 & 0.07 & 0.05 & 2.95 & 5.42 & 2.64 & 0.92\\ NV9 & 0.13 & 0.08 & 0.05 & 3.16 & 5.50 & 3.07 & 1.15\\ SV2 & 0.46 & 0.080 & 0.13 & 4.36 & 0.03 & 4.96 & 1.54\\ SV3 & 0.31 & 0.15 & 0.20 & 5.00 & 4.67 & 4.02 & 1.60\\ \enddata \end{deluxetable} \section{Notes on Individual Stars in NGC~6441} V107, V113, V115, V121--- The ISIS period that best phases our data is slightly different from the HST period. This is shown in Table 1. V140 --- The \citet{P03} period does not phase our data. Our smoothest light curve is obtained with a period of 0.616 d. However, the light curve, as seen in Figure 1, is very symmetrical, and, except for the relatively sharp peak, resembles that of an RRc variable. A reanalysis of the WFPC2 photometry from \citet{P03} shows that a period of 0.6141~d fits the data, although the light curve shows the star to spend less time at minimum light than is often the case. Thus, it seems likely that the longer period is correct. Note that at present the longest confirmed period for an RRc variable in a globular cluster is about 0.56~d, with only one candidate RRc having recently been suggested with a period longer than 0.6~d (Contreras et al. 2005). Because the Fourier and skewness parameters for V140 do not give a clear classification, further data are needed to confirm whether V140 is an RRab or RRc star. V141 --- Our smoothest light curve is obtained with a period of 0.457~d, as shown in Figure 1. However, the resulting light curve is unusual looking. A period of 0.847~d gives almost as good a light curve and is more consistent with the period in \citet{P03}. Thus, the longer period has been adopted. V142 --- The HST period phases our data well. However, the resulting light curve is unusual looking. 
NV1 (V146) --- We could not find a period that phases our data well. The best period we could determine is 0.402 day, which may suggest an RRc variable. We note that, among Oosterhoff type II clusters such as M15, a period near 0.402 day and a light curve showing scatter are sometimes indicators that a star is a double-mode pulsator. However, no double-mode RR Lyrae stars have yet been discovered in NGC~6441 and, since NGC~6441 seems to contain some RRc stars with periods longer than 0.402 day, it is not clear whether one might expect that double-mode RR Lyrae stars in that cluster would have first overtone mode periods near 0.402 day. NV4 (V149) and NV5 (V150)--- These variables have periods of 0.557~d and 0.529~d, again relatively long for RRc-type variables. However, the differential flux light curves are more symmetric than is typical of RRab variables, which would suggest an RRc classification. Long-period RRc variables, while exceedingly rare in globular clusters in general (Catelan 2004 and references therein), have previously been found in both NGC~6388 and NGC~6441 \citep{P01,P02,P03}. \begin{figure}[t] \figurenum{5} \epsscale{0.85} \plotone{f5.eps} \caption{Plot of the Fourier parameter $A_{21}$ versus $\phi_{21}$ for RR Lyrae stars in NGC 6441. } \label{fig5} \end{figure} \section{Discussion} The ISIS analysis of the ground-based observations of NGC 6441, in spite of seeing ranging from $1.1\arcsec$ to $2.5\arcsec$, rediscovered virtually all of the RR Lyrae stars and Cepheids catalogued by \citet{P03}. ISIS also identified a few possible variable candidates not appearing in the Pritzl et al. catalog. This confirms the utility of ISIS for identifying and classifying the brighter variable stars in crowded fields. On the other hand, ISIS does not by itself provide light curves on a standard photometric system, as do reduction routines such as {\sc daophot}, nor does it give astrometric positions as accurate as those obtained from WFPC2. 
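When a reference-frame flux for a star is available, the conversion from differential flux to magnitude is elementary. The following sketch (with an invented zero point and fluxes) makes explicit which ingredient is missing for the badly blended stars discussed above:

```python
import math

def diff_flux_to_mag(delta_flux, ref_flux, zero_point=25.0):
    """Convert a differential flux to a magnitude, *given* the star's flux on
    the reference frame -- the quantity that conventional profile-fitting
    photometry cannot supply for the blended stars discussed in the text.
    The zero point is instrument-specific and invented here."""
    total = ref_flux + delta_flux
    if total <= 0:
        raise ValueError("non-physical total flux")
    return zero_point - 2.5 * math.log10(total)

# A star at its reference brightness has delta_flux = 0; doubling the flux
# makes it brighter by 2.5*log10(2) ~ 0.753 mag.
m0 = diff_flux_to_mag(0.0, 1000.0)
m1 = diff_flux_to_mag(1000.0, 1000.0)
assert abs((m0 - m1) - 2.5 * math.log10(2)) < 1e-12
```

The differential fluxes alone fix only flux ratios relative to an unknown baseline, which is why no rigorous flux-to-magnitude transformation exists without independent photometry of the reference frame.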
Including the five newly discovered variables, we find that NGC~6441 is now known to contain 68 probable RR Lyrae variables. Accepting NV1, NV2, and NV3 as RRc variables, and taking account of the uncertain Bailey type of V140, NGC~6441 contains 42 known RRab stars and 23 known RRc stars, changing the ratio of RRc to all RR Lyrae stars from the value of 0.33 found in \citet{P03} to 0.35. The resultant mean periods are 0.759~d and 0.392~d for the RRab and RRc stars, respectively. If NV4 and NV5 are also actually RRc variables, then the total number of RRc stars increases to 25 and their mean period increases to 0.404~d. In either case, NGC~6441 remains among the clusters with the largest values of $\langle P_{ab}\rangle$ and $\langle P_{c}\rangle$. For NGC~6388, five of the new variables seem to be clearly RRab stars and three are RRc stars. Adding these variables to those in \citet{P02} gives a total of 22 probable RR Lyrae stars. Of these, nine are RRab stars and eleven are RRc stars (if, as \citet{P02} suggest, V26 and V34 are excluded as nonmembers). These totals do not include the variables listed as questionable c or ab type stars in Table 3 of \citet{P02}. It is probable that many of the questionable stars are in fact RR Lyrae variables, as indicated in \citet{P02}, but for various reasons the light curves of these stars were noisy or incomplete. If all of the additional stars listed as ``c?" or ``ab?" variables in Table 4 are included, the number of RRc stars goes up by 7 and the number of RRab variables goes up by 2. From the confirmed RRab and RRc variables, \citet{P02} obtained a ratio of RRc to total RR Lyrae stars of 0.67 (or 0.71 if V26 and V34 are included). The new discoveries revise this ratio to 0.55 (without V26 and V34) or 0.59 (with V26 and V34). The resultant mean periods are $\langle P_{ab}\rangle = 0.676$~d (excluding all questionable RRab stars) and $\langle P_{c}\rangle = 0.364$~d (with V26 and V34) or 0.387~d (without V26 and V34). 
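As a consistency check, the revised NGC~6388 ratios follow directly from the counts quoted above:
\begin{displaymath}
\frac{N_c}{N_{ab}+N_c} = \frac{11}{9+11} = 0.55 \qquad \mbox{or} \qquad \frac{13}{9+13} \simeq 0.59~,
\end{displaymath}
without and with V26 and V34 counted as RRc stars, respectively.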
Because the new variables make a significant addition to the RR Lyrae inventory for NGC~6388, we show a revised period histogram in Figure 9. V26 and V34 have been excluded from this histogram as possible field stars, but all other probable RR Lyrae stars in Table 3 of \citet{P02} have been included. The identification of additional probable and suspected type II Cepheids in NGC~6388 confirms its status as a cluster rich in such variables. This further strengthens the unusual standing of NGC~6388 and NGC~6441 as metal-rich globular clusters also rich in type II Cepheids. \begin{figure}[t] \figurenum{6} \epsscale{0.85} \plotone{f6.eps} \caption{Plot of the skewness parameter Sk versus the Fourier phase parameter $\phi_{21}$ for variable stars in NGC 6441. The RR Lyrae stars of uncertain classification, NV4, NV5, and V140, are indicated by triangles. } \label{fig6} \end{figure} The periods of the RRc stars in NGC~6388 and NGC~6441 can be transformed to their fundamental-mode equivalents by adding 0.128 to the logarithms of their periods. The resultant histograms for RR Lyrae stars in these two clusters are shown in Figure 10. In the case of NGC~6441, we plot V118 as an RRab star, although \citet{P03} noted that it might possibly be a type II Cepheid. The newly found variables V146, V147, and V148 have been included. The fundamentalized histogram for NGC~6388 climbs toward the short-period end, indicating that the hotter side of the RR Lyrae instability strip is more populated than the cooler side. Although the histogram for NGC~6441 shows some peaks, no such overall trend is clearly discernible. Color-magnitude diagrams of the horizontal branches of NGC~6388 and NGC~6441 \citep{P03, B04} show that the density of stars on the blue extensions to the horizontal branch in both cases declines toward the cool side of the instability strip, eventually increasing again when a strong concentration of red horizontal branch stars is reached. 
If this decline takes place within the instability strip in the case of NGC~6388 but mostly to the cool side of the instability strip in the case of NGC~6441, then the existence of a trend in the NGC~6388 histogram but not in that of NGC~6441 can be understood. For further discussion on the relation between the star distribution in the HR diagram and the resulting period distribution, the reader is referred to \citet{R89}, \citet{Cat04}, and \citet{Cas05}. \begin{figure}[t] \figurenum{7} \epsscale{0.85} \plotone{f7.eps} \caption{The Fourier $A_{21}$ parameter is plotted against $\phi_{21}$ for the newly discovered RR Lyrae stars in NGC 6388. The filled and open circles indicate RRab and RRc stars in NGC~6388, respectively, plotted with data from Table~6 of \citet{P02}. } \label{fig7} \end{figure} Although their metallicities are much higher than those of canonical Oosterhoff type II clusters such as M15 (NGC 7078) or M68 (NGC 4590), the mean periods of RR Lyrae stars in NGC~6388 and NGC~6441 are as large as, or larger than, those in Oosterhoff type II systems. We can compare the fundamentalized histograms of Figure 10 with those of M15 and M68 -- see Figure 1 in \citet{Cas04}. NGC~6441 and, to a lesser extent, NGC~6388 have a higher proportion of RR Lyrae stars with periods longer than 0.8 days. Otherwise, the differences are not greater than are seen among the histograms of more ordinary clusters. The peak toward the shorter period end of the NGC~6388 histogram is similar to that seen in the histogram of M68. M15 also shows a peak toward shorter periods, but, like NGC~6441, the Oosterhoff type II cluster M2 (NGC 7089) shows a relatively flat distribution of RR Lyrae periods. In this sense, it is clear that neither NGC~6388 nor NGC~6441 shows a sharply peaked period distribution to the same extent as is seen in the case of the Oosterhoff type I cluster M3 (NGC 5272) \citep{R89,Cat04}. 
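The fundamentalization applied in Figure 10 amounts to multiplying each RRc period by $10^{0.128} \simeq 1.34$, i.e.
\begin{displaymath}
\log P_{\rm fund} = \log P_c + 0.128~,
\end{displaymath}
where the shift corresponds to the canonical double-mode period ratio $P_1/P_0 \approx 0.745$. As an example, the NGC~6441 mean $\langle P_c\rangle = 0.392$~d fundamentalizes to $\simeq 0.53$~d, still well below $\langle P_{ab}\rangle = 0.759$~d.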
\citet{P02} did note that there was one Oosterhoff type II globular cluster that shared some of the peculiarities of NGC~6388 and NGC~6441. The unusual globular cluster $\omega$ Centauri, like NGC~6388 and NGC~6441, contains some RRab and RRc stars of especially long period. In the case of $\omega$ Cen the long period RR Lyrae stars are, however, accompanied by a shorter period RR Lyrae population, so that the mean period of RRab stars in $\omega$ Cen is shorter than in NGC~6388 or NGC~6441. \begin{acknowledgements} H.A.S. thanks the Center for the Study of Cosmic Evolution and the National Science Foundation for partial support of this work under grant AST-0205813. Support for M.C. was provided by Proyecto FONDECYT Regular No. 1030954. Support for B.J.P. was provided through a National Science Foundation CAREER award, AST-9984073. The observations for the NGC~6441 variables were obtained in part with the NASA/ESA Hubble Space Telescope under program SNAP8251. The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. \end{acknowledgements}
\section{Conclusions} \label{sim2:conc} In this paper, the generalization of Cohen and Glashow's Very Special Relativity to curved space-times has been considered. In general, gauging the $SIM(2)$ symmetry, which leaves the preferred null direction $n^\mu$ invariant, does not provide the complete couplings to the gravitational background. One can, however, construct locally $SIM(2)$ invariant Lagrangians from the terms listed in \eqref{sim2:terms}. These terms make use of the standard $SO(3,1)$ covariant derivative and, therefore, do not derive from a standard gauging of $SIM(2)$. Moreover, for a general space-time and/or a generic null vector field $n^\mu$, such Lagrangians do not lead to freely propagating chiral fermions. By contrast, for space-times with $SIM(2)$ holonomy, the $SO(3,1)$ covariant derivatives in the Lagrangians coincide with $SIM(2)$ covariant derivatives. In these cases, and if, in addition, the space-time is a vacuum, the Lagrangians describe freely propagating massive chiral fermions, just as in Minkowski space-time. It is essential here that the null vector field $n^\mu$ is not generic, but is such that its direction remains invariant under parallel transport. \section{VSR in Minkowski Space-Time} \label{sim2:flat} Consider the Lorentz algebra $SO(3,1)$ in the form \begin{equation} \label{sim2:Lorentz} [M_{ab},M_{cd}]= \eta_{ad}M_{bc} +\eta_{bc}M_{ad} -\eta_{ac}M_{bd} -\eta_{bd}M_{ac}~, \end{equation} with the metric $\eta_{ab} = \operatorname{diag}(+,-,-,-)$. In the vector representation, the generators are $(M_{ab})^c_{\:\; d}= 2\delta_{[a}^c \eta_{b]d}$. Throughout the paper, Latin indices will denote the components in the local Lorentz frame, whereas Greek indices label the space-time coordinates. In this section, the vierbein is fixed to $e^a_\mu=\delta^a_\mu$, but this restriction will be lifted in the subsequent section. 
In a light-cone basis of $SO(3,1)$, defined by $M_{\pm i}= (M_{0i}\pm M_{3i})/\sqrt{2}$, $M_{-+}=M_{03}$, the generators of $SIM(2)$ are $J=M_{12}$, $K=M_{-+}$, $T_i=M_{+i}$ and satisfy the commutation relations \begin{equation} \label{sim2:sim2alg} [T_i, T_j]=0~,\quad [J,T_i]=\epsilon_{ij}T_j~, \quad [K,T_i]= -T_i~,\quad [K,J]=0~. \end{equation} We shall write a general element of $SIM(2)$ in the compact form \begin{equation} \label{sim2:sim2} \frac12 \lambda^{ab} M_{ab} = \lambda J + \tilde{\lambda} K +\lambda^i T_i~, \end{equation} where $\lambda=\lambda^{12}$, $\tilde{\lambda}=\lambda^{-+}$, $\lambda^i=\lambda^{+i}$, remembering that $\lambda^{-i}=0$. The essential property of $SIM(2)$, as defined above, is that it leaves invariant the \emph{direction} of the null vector $n^\mu$ with components $n^+=1$, $n^-=n^i=0$: \begin{equation} \label{sim2:n.trafo} \delta n^\mu = \tilde{\lambda} n^\mu~. \end{equation} The equation of motion for a chiral spinor containing the non-local $SIM(2)$ mass term is \cite{Cohen:2006ir} \begin{equation} \label{sim2:eom.spinor} \left( \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial + \frac{m^2}2 \frac{\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n}{n\cdot \partial} \right) \nu_L=0~. \end{equation} The field $\nu_L$ propagates as a particle of mass $m$, as one can easily see by squaring the operator on the left-hand side of \eqref{sim2:eom.spinor}, which yields a Klein-Gordon equation. 
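Explicitly, squaring the operator in \eqref{sim2:eom.spinor} with $n$ constant and null, one finds
\begin{displaymath}
\left( \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial + \frac{m^2}2 \frac{\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n}{n\cdot \partial} \right)^2 = \partial^2 + \frac{m^2}{2}\, \frac{\{\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial , \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n\}}{n\cdot \partial} = \partial^2 + m^2~,
\end{displaymath}
since $\{\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial , \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n\} = 2\, n\cdot \partial$ and the term quadratic in $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n$ vanishes by $n^2=0$; hence $(\partial^2+m^2)\nu_L=0$.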
The dynamics of \eqref{sim2:eom.spinor} was obtained in \cite{Alvarez:2008uy} from a local Lagrangian involving auxiliary fields, equivalent to the following, \begin{equation} \label{sim2:Lag.flat} \mathcal{L} = \frac{i}{2} \bar{\nu}_L \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \nu_L + i \bar{\chi}_L n\cdot \partial \psi_R +\frac12 m \left(\bar{\chi}_L\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_L + \bar{\psi}_R \nu_L \right) + \text{c.c.} \end{equation} For the sake of clarity, we have indicated the handedness of the various spinor fields as subscripts. The existence of a local Lagrangian does not contradict what was said in the introduction. Indeed, to obtain $SIM(2)$ invariance of \eqref{sim2:Lag.flat} one must pay the price that $\chi_L$ is not a Lorentz spinor, as one can see from the \emph{global} $SIM(2)$ symmetries of \eqref{sim2:Lag.flat}, \begin{equation} \label{sim2:Lag.sym} \delta \nu_L = \frac14 \lambda^{ab} \gamma_{ab} \nu_L~, \quad \delta \psi_R = \frac14 \lambda^{ab} \gamma_{ab} \psi_R~, \quad \delta \chi_L = \frac14 \lambda^{ab} \gamma_{ab} \chi_L -\tilde{\lambda} \chi_L~, \quad \delta n^\mu = \tilde{\lambda} n^\mu~, \end{equation} where $\lambda^{ab}$ is defined by \eqref{sim2:sim2}. The transformations of the conjugate fields $\bar{\nu}$, $\bar{\psi}$ and $\bar{\chi}$ follow from \eqref{sim2:Lag.sym}. There are other local Lagrangians involving auxiliary fields that give rise to the equation of motion \eqref{sim2:eom.spinor}. 
For example, \begin{equation} \label{sim2:Lag1.flat} \mathcal{L} = \frac{i}{2} \bar{\nu}_L \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \nu_L + i \bar{\chi}_L n\cdot \partial \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \chi_L + m \bar{\chi}_L \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_L + \text{c.c.} \end{equation} As we have $n\cdot \partial \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n= \frac12 \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n$, the auxiliary field $\chi_L$ appears in \eqref{sim2:Lag1.flat} only in the combination $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \chi_L$. This suggests defining $\nu_R=\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \chi_L$ and imposing the constraint $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_R=0$ via a Lagrange multiplier, leading to the Lagrangian \begin{equation} \label{sim2:Lag2.flat} \mathcal{L} = \frac{i}{2} \bar{\nu} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \nu + \frac12 m \bar{\nu} \nu + \bar{\lambda}_R \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_R + \text{c.c.} \end{equation} In this form of the Lagrangian, the neutrino field starts off as a Dirac spinor, $\nu=\nu_L+\nu_R$, with the usual Dirac mass term, but the right-handed component is constrained by the Lorentz breaking term. In the context of the Standard Model, one is naturally led to ask why $\nu_R$ should be sterile in the weak interactions. A similar question arises also in the other local Lagrangians, \eqref{sim2:Lag.flat} and \eqref{sim2:Lag1.flat}, where one may consider possible couplings of the auxiliary fields $\chi_L$ and $\psi_R$ to the weak interaction gauge fields. We shall not address these questions in this paper. 
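The identity $n\cdot \partial \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n= \frac12 \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n$ used above is a one-line consequence of the Clifford algebra (with $n$ constant and null):
\begin{displaymath}
\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n = \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \left( 2\, n\cdot \partial - \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \right) = 2\, (n\cdot \partial)\, \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n~,
\end{displaymath}
using $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n + \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.58em} \partial = 2\, n\cdot \partial$ and $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n = n^2 = 0$.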
\section{SIM(2) in Curved Space-Times} \label{sim2:general} From this point on, let us consider general vierbeins $e^a_\mu$ with zero torsion. Our aim is to generalize the actions given in the previous section such that, first, they are manifestly coordinate invariant, and second, the symmetry transformations \eqref{sim2:Lag.sym} are promoted to \emph{local} symmetries. To start, let us make a little detour and review how space-time symmetries are treated in the tetrad formalism. This helps us to disentangle the space-time from the Lorentz-frame symmetries in Minkowski space-time and to obtain a local $SIM(2)$ frame symmetry. A space-time symmetry is given by a Killing vector field $\xi^\mu$ satisfying \begin{equation} \label{sim2:Killing} \mathcal{L}_\xi g_{\mu\nu} = \nabla_\mu \xi_\nu +\nabla_\nu \xi_\mu =0~. \end{equation} To promote this symmetry to a symmetry of the vierbeins, one combines the coordinate transformation $x'{}^\mu=x^\mu+\xi^\mu$ with a rotation of the local Lorentz frame, such that \begin{equation} \label{sim2:Killing2} \delta_\xi e^a_\mu = -\mathcal{L}_\xi e^a_\mu +\lambda(\xi)^a_{\:\; b} \, e^b_\mu = -\left[ \xi^\nu \nabla_\nu e^a_\mu + (\nabla_\mu \xi^\nu) e^a_\nu \right] +\lambda(\xi)^a_{\:\; b} \,e^b_\mu =0~. \end{equation} Hence, one obtains \begin{equation} \label{sim2:Lorentz.trafo} \lambda(\xi)_{ab} = -(\nabla_\mu \xi_\nu) e^\mu_a e^\nu_b -\xi^\mu \omega_{\mu ab}~, \end{equation} where $\omega_{\mu ab}$ are the spin connections determined by the zero torsion constraints $D_\mu e^a_\nu=\nabla_\mu e^a_\nu +\omega_\mu{}^a_{\:\; b}\, e^b_\nu =0$. The presence of the null vector field $n^\mu$ (with \emph{frame} components $n^+=1$, $n^-=n^i=0$) breaks those space-time symmetries which lead to non-zero matrix elements $\lambda(\xi)^{-i}$ and $\lambda(\xi)^{-+}$. 
For example, for Minkowski space-time with a constant null vector, considered in the previous section, the remaining symmetries are $T_i$ and $J$, which generate an $E(2)$ subgroup of the Lorentz group, whereas $M_{-i}$ and $K$ are broken. This group can be enhanced to $SIM(2)$ containing $T_i$, $J$ and $K$ by \emph{postulating} that the null vector is always given by $n^\mu=e^\mu_+$. To achieve this, one modifies the transformation law of the frame components $n^a$ under $K$, giving rise to the $SIM(2)$ representation \begin{equation} \label{sim2:n.rep} \Gamma^{(n)}(K)^a_{\:\; b} = K^a_{\:\; b} +\delta^a_b~,\quad \Gamma^{(n)}(T_i)=T_i~, \quad \Gamma^{(n)}(J)=J~. \end{equation} It is straightforward to show that, in this representation, $n^a$ is $SIM(2)$ invariant. At this point, the $SO(3,1)$ of local Lorentz frame rotations has been reduced to $SIM(2)$, because the modified representation $\Gamma^{(n)}$ is not contained in a representation of $SO(3,1)$. It is this symmetry that we would like to gauge. In contrast to the usual tetrad formalism, where space-time tensors are invariant under frame rotations, $n^\mu$ transforms non-trivially under $SIM(2)$, \begin{equation} \label{sim2:n.mu.trafo} \delta n^\mu = \delta e^\mu_+ = \lambda_+^{\:\; a}\, e^\mu_a = \lambda^{-+}\, e^\mu_+ = \tilde{\lambda} n^\mu~. \end{equation} Coupling $n^\mu$ to a Lorentz spinor $\nu$ makes it necessary to introduce an auxiliary field $\chi$ with an appropriate transformation law such that $(\bar{\nu} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \chi)$ is $SIM(2)$-invariant. 
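The invariance of $n^a$ is quickly verified in components. With the vector generators $(M_{ab})^c_{\:\; d}= 2\delta_{[a}^c \eta_{b]d}$ and the light-cone metric components $\eta_{+-}=\eta_{-+}=1$, $\eta_{ij}=-\delta_{ij}$ implied by the definitions of Sec.~\ref{sim2:flat}, the generators act on $n^a=(n^+,n^-,n^i)=(1,0,0)$ as
\begin{displaymath}
(K\, n)^a = \delta^a_-\, n_+ - \delta^a_+\, n_- = -\delta^a_+ = -n^a~,\qquad (T_i\, n)^a = (J\, n)^a = 0~,
\end{displaymath}
so that $\Gamma^{(n)}(K)\, n = (K+1)\, n = 0$, and $n^a$ is annihilated by all $SIM(2)$ generators in the representation \eqref{sim2:n.rep}.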
One finds, as in \eqref{sim2:Lag.sym}, \begin{equation} \label{sim2:chi.trafo} \delta \chi = \left( \frac14 \lambda^{ab} \gamma_{ab} -\tilde{\lambda} \right) \chi~, \end{equation} implying the $SIM(2)$ representation \begin{equation} \label{sim2:chi.rep} \Gamma^{(\chi)}(K) = \frac12 \gamma_{-+} -1~,\quad \Gamma^{(\chi)}(T_i)=\frac12 \gamma_{+i}~,\quad \Gamma^{(\chi)}(J)=\frac12 \gamma_{12}~, \end{equation} where the terms formed by the gamma matrices are inherited from the spinor representation of $SO(3,1)$. A $SIM(2)$ covariant derivative can be introduced as \begin{equation} \label{sim2:cov.der} \tilde{D}_\mu = \nabla_\mu + \tilde{\omega}_\mu^{\:\; +i}\,\Gamma(T_i) +\tilde{\omega}_\mu^{\:\; -+}\,\Gamma(K) + \tilde{\omega}_\mu^{\:\; 12}\,\Gamma(J)~, \end{equation} where $\Gamma$ stands for the representation appropriate for the field the derivative acts on. We have adorned the derivative and the gauge fields with a tilde to distinguish them from the usual, $SO(3,1)$ covariant derivative and the spin connections, $D_\mu$ and $\omega_\mu^{\:\; ab}$, respectively. Indeed, one cannot, in general, identify $\omega_\mu^{\:\; ab}$ with $\tilde{\omega}_\mu^{\:\; ab}$, because they do not form a closed set under $SIM(2)$ transformations. This can be seen, e.g.,\ in the transformation of the spin connection $\omega_\mu{}^{-+}$ under $SIM(2)$, \begin{equation} \label{sim2:spin.conn.trafo} \delta \omega_\mu^{\:\; -+} = - \partial_\mu \tilde{\lambda} + \lambda^i\, \omega_{\mu}^{\:\; -i}~. \end{equation} This involves $\omega_{\mu}^{\:\; -i}$, for which there is no corresponding $SIM(2)$ transformation. Hence, in general, the $SIM(2)$ gauge fields do not provide the coupling to gravity. However, for fields, which inherit their representation from $SO(3,1)$, we can use the $SO(3,1)$ covariant derivative, which does provide the coupling to gravity. 
In the case of the neutrino kinetic term, this is precisely what one wants to do, because the breaking of $SO(3,1)$ should come only from terms in the Lagrangian, which involve $n^\mu$. The unusual fields are $n^\mu$ and $\chi$, which transform in the representations \eqref{sim2:n.rep} and \eqref{sim2:chi.rep}, respectively, but one easily realizes that the combinations $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \chi$ and $n^\mu \chi$ transform under $SIM(2)$ as $SO(3,1)$ fields. Hence, the $SO(3,1)$ covariant derivative may act on these combinations. Using the fields $\nu$, $\psi$ and $\chi$, transforming under $SIM(2)$ as in \eqref{sim2:Lag.sym}, one can write down the following, locally $SIM(2)$ invariant terms, \begin{equation} \label{sim2:terms} \bar{\nu} \ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D \nu~, \quad \bar{\psi}\ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D \psi~, \quad \bar{\chi} n^\mu D_\mu \psi~,\quad \bar{\chi} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D \psi~,\quad \bar{\chi} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n D_\mu n^\mu \chi~,\quad \bar{\chi} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \chi~,\quad \bar{\chi} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu~,\quad \bar{\psi} \nu~, \end{equation} as well as their complex conjugates. Other $SIM(2)$ invariant terms, for example $\bar{\chi} \gamma^{\mu\nu\rho} n_\mu (\nabla_\nu n_\rho) \chi$, are equivalent to these. As an aside, we remark that the Lagrangian given in eq.~(12) of \cite{Alvarez:2008uy} is not locally $SIM(2)$ invariant, because the third term on the right hand side of that equation is not. 
In what follows, we shall assume that $\omega_\mu^{\:\; -i}=0$, in which case we can identify the $SIM(2)$ gauge fields with the remaining spin connections, and $D_\mu$ agrees with $\tilde{D}_\mu$ when acting on Lorentz fields, \begin{equation} \label{sim2:omega.-i} \omega_\mu^{\:\; -i}=0~:\qquad \tilde{\omega}_\mu^{\:\; ab}=\omega_\mu^{\:\; ab}~,\qquad D_\mu \to \tilde{D}_\mu~. \end{equation} We should interpret this assumption in the sense that we consider those space-times, in which one can choose a Lorentz frame such that \eqref{sim2:omega.-i} holds. This is not the generic case, since the six $SO(3,1)$ frame rotations do not suffice to eliminate the eight components $\omega_\mu^{\:\; -i}$. If, however, such a frame exists, then the condition \eqref{sim2:omega.-i} is $SIM(2)$ invariant. The assumption \eqref{sim2:omega.-i} can be rephrased as $SIM(2)$ holonomy. This follows from the fact that the $SIM(2)$ covariant derivative of $n^\mu$ vanishes implying that $n^\mu$ is a recurrent null vector field, \begin{equation} \label{sim2:tDn} (\tilde{D}_\mu n^\nu) = (\nabla_\mu n^\nu) + \omega_\mu^{\:\;-+}\, n^\nu=0~. \end{equation} A corollary of \eqref{sim2:omega.-i} is \begin{equation} \label{sim2:R.-i} R_{\mu\nu}^{\:\;\ei -i} = 0~. \end{equation} For more information on space-times with $SIM(2)$ holonomy, we refer to \cite{Gibbons:2007zu} and references therein. In the remainder, we shall consider the generalization of VSR to space-times with $SIM(2)$ holonomy, i.e.,\ satisfying \eqref{sim2:omega.-i}. In addition, we assume the space-time to be a vacuum solution of Einstein's equations, possibly with a cosmological constant, such that $R_{\mu\nu}=\Lambda g_{\mu\nu}$. 
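The corollary \eqref{sim2:R.-i} can also be seen directly from the Cartan structure equation for the curvature two-form, $R^{ab}=d\omega^{ab}+\omega^{a}{}_{c}\wedge\omega^{cb}$: in light-cone components,
\begin{displaymath}
R^{-i} = d\omega^{-i} + \omega^{-+}\wedge\omega^{-i} - \omega^{-j}\wedge\omega^{ji}~,
\end{displaymath}
and every term on the right-hand side carries a factor $\omega^{-i}$ or $\omega^{-j}$, which vanishes in a frame satisfying \eqref{sim2:omega.-i}.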
The commutator of two $SIM(2)$ covariant derivatives, which will be used below, reflects the $SIM(2)$ holonomy, \begin{equation} \label{sim2:tDD} \left[\tilde{D}_\mu, \tilde{D}_\nu \right] = \left[\nabla_\mu, \nabla_\nu\right] + R_{\mu\nu}^{\:\;\ei -+}\, \Gamma(K) + R_{\mu\nu}^{\:\;\ei 12}\, \Gamma(J) + R_{\mu\nu}^{\:\;\ei +i}\, \Gamma(T_i)~. \end{equation} Let us consider the simplest Lagrangian, which is given by \eqref{sim2:Lag2.flat}, with $\partial_\mu$ replaced by $D_\mu=\tilde{D}_\mu$ (as it acts on a Lorentz spinor), \begin{equation} \label{sim2:Lag.gen} \mathcal{L} = \frac{i}{2} \bar{\nu} \tilde{\ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D} \nu + \frac12 m \bar{\nu} \nu + \bar{\lambda}_R \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_R + \text{c.c.} \end{equation} It gives rise to the equations of motion \begin{align} \label{sim2:eom.a} i \tilde{\ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D} \nu_L + m \nu_R &=0~,\\ \label{sim2:eom.b} i \tilde{\ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D} \nu_R + m \nu_L +\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \lambda_R &=0~,\\ \label{sim2:eom.c} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_R &=0~. \end{align} After multiplying \eqref{sim2:eom.b} by $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n$ and using \eqref{sim2:tDn} and \eqref{sim2:eom.c}, one obtains \begin{equation} \label{sim2:nuR.sol} \nu_R= \frac{im}{2n^\mu \tilde{D}_\mu} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_L~. 
\end{equation} Substituting \eqref{sim2:nuR.sol} back into \eqref{sim2:eom.b} and making use of $\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \tilde{\ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D} \nu_L=0$, which follows from \eqref{sim2:eom.a}, yields \begin{equation} \label{sim2:n.lambda} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \lambda_R = \frac{i}{n^\rho \tilde{D}_\rho} \gamma^\mu n^\nu \left[\tilde{D}_\mu, \tilde{D}_\nu \right] \nu_R~. \end{equation} As $\nu_R$ is a Lorentz spinor and satisfies $\gamma_+\nu_R=\ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \nu_R=0$, \eqref{sim2:n.lambda} becomes \begin{equation} \label{sim2:n.lambda2} \ensuremath \,\raisebox{0.2ex}{\slash}\hspace{-0.6em} n \lambda_R = \frac{i}{2n^\rho \tilde{D}_\rho} \gamma^\mu n^\nu \left( R_{\mu\nu}^{\:\; \:\; 12}\, \gamma_{12} -R_{\mu\nu}^{\:\; \:\; -+} \right) \nu_R = \frac{i}{2n^\mu \tilde{D}_\mu} R_{+j} \, \gamma^j \nu_R =0~. \end{equation} The last step follows from the vacuum property of the space-time background. Finally, from \eqref{sim2:eom.a}, \eqref{sim2:eom.b} and \eqref{sim2:n.lambda2} one easily obtains \begin{equation} \label{sim2:propa.gen} \left( \ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D \ensuremath \:\raisebox{0.2ex}{\slash}\hspace{-0.74em} D + m^2 \right) \nu_L = 0~. \end{equation} \section{Introduction} \label{intro} Recently, Cohen and Glashow \cite{Cohen:2006ky,Cohen:2006ir} proposed an interesting origin of neutrino mass. Breaking Lorentz symmetry to a four-parameter subgroup called $SIM(2)$, the Dirac equation for a chiral fermion may be augmented with a non-local term leading to propagation as a massive particle. In this scheme, which they called Very Special Relativity (VSR), the departure from Lorentz invariance implies the breaking of the discrete symmetries $P$, $CP$ and $T$ (but not $CPT$), suggesting a common origin for small $CP$-violating effects and neutrino masses. 
As Cohen and Glashow argued, VSR is consistent with current experimental bounds on Lorentz symmetry breaking and, therefore, constitutes an interesting modification of Standard Model physics. In contrast to other approaches to Lorentz breaking, VSR is free of spurions, i.e.,\ it does not involve spontaneous symmetry breaking by non-zero expectation values of Lorentz tensors. This is a consequence of the fact that $SIM(2)$ does not possess invariant tensors (except the scalar), which implies also that a local Lagrangian, which breaks Lorentz symmetry while maintaining $SIM(2)$, cannot be constructed merely out of Lorentz tensors. Another feature of $SIM(2)$, which distinguishes it from its parent $SO(3,1)$, is that all its irreducible representations are one-dimensional, labelled by spin along a preferred axis. Hence, VSR predicts (very small) mass splittings in $SO(3,1)$ matter multiplets \cite{Fan:2006nd}. $SIM(2)$ supersymmetry has been considered in \cite{Cohen:2006sc,Lindstrom:2006xh}. An obvious question is whether and in which ways VSR can be generalized to space-times, which are not Minkowski. Gibbons, Gomis and Pope \cite{Gibbons:2007iu} searched for deformations of $ISIM(2)$, i.e.,\ $SIM(2)$ plus translations, analogous in spirit to the deformation of the Poincar\'e to the de~Sitter or anti-de~Sitter algebras. They found that there is such a deformation, but space-time would be described by a Finslerian rather than Riemannian geometry. Alvarez and Vidal \cite{Alvarez:2008uy} considered implementations of $SIM(2)$ in de~Sitter space-time, motivated by the experimental finding of a small positive cosmological constant. To do this, they wrote down a local Lagrangian with auxiliary fields, which reproduces the VSR dynamics of the neutrino in Minkowski space-time, and replaced ordinary with covariant derivatives. 
They did not verify, however, whether the Lagrangian thus obtained is invariant under \emph{local} $SIM(2)$ transformations, as one would expect for a gauging. In the present paper, we intend to follow up on this point. As will be discussed, the gauging of $SIM(2)$ does not lead, in general, to a consistent coupling to gravity. However, in vacuum space-times with $SIM(2)$ holonomy, matters are similar to Minkowski space-time, and chiral neutrinos do propagate as massive particles. An outline of the rest of the paper is as follows. In Sec.~\ref{sim2:flat}, the VSR dynamics of neutrinos in Minkowski space-time is reviewed, and $SIM(2)$ invariant Lagrangians leading to the VSR neutrino mass are given. In Sec.~\ref{sim2:general}, the gauging of $SIM(2)$ is discussed. Specializing to space-times with $SIM(2)$ holonomy, where the $SIM(2)$ covariant derivative provides the complete coupling to the gravitational background, the propagation of a chiral fermion as a massive particle is derived. Finally, Sec.~\ref{sim2:conc} contains the conclusions. \section*{Acknowledgments} It is a pleasure to thank L.~Cappiello for fruitful discussions. This work has been supported in part by the European Community's Human Potential Programme under contract MRTN-CT-2004-005104 'Constituents, fundamental forces and symmetries of the universe' and by the Italian Ministry of Education and Research (MIUR), project 2005-023102. \bibliographystyle{JHEP}
\section{Introduction} The idea of extra dimensions is by now a well-known one. It has led to new solutions to the gauge hierarchy problem without imposing supersymmetry~\cite{ADD98,RS1}, and it has opened up new avenues to attack the flavour puzzle in the Standard Model (SM). One such application is the seminal proposal of split fermions by Arkani-Hamed and Schmaltz~\cite{AS} that the fermion mass hierarchy can be generated from the wave function overlap of fermions located differently in the extra dimension. The split fermion scenario has been implemented in both flat extra dimension models~\cite{AS,FSF} and warped extra dimension Randall-Sundrum (RS) models~\cite{GN,RSF}. Subsequently, phenomenologically successful mass matrices were found in the case of one flat extra dimension without much fine tuning of the Yukawa couplings~\cite{CHN}, while in the case of warped extra dimensions, realistic fermion masses and mixing patterns can be reproduced with almost universal bulk Yukawa couplings~\cite{H03,CKY06,MSM06}. To date, many attempts at understanding the fermion flavour structure have been made in terms of symmetries. Fermion mass matrix ans\"atze with a high degree of symmetry were constructed to fit simultaneously the observed mass hierarchy and flavour mixing patterns. It is an interesting question whether, in the purely geometrical setting of the RS framework where there are no flavour symmetries a priori, such symmetric forms can arise naturally without fine tuning of the Yukawa couplings, i.e. whether symmetries in the fermion mass matrices can be compatible with a natural, hierarchyless Yukawa structure in the RS framework, and to what degree. Another interesting and related question is whether or not one can experimentally discern if the fermion mass matrices are symmetric in the RS framework. In the SM, only the left-handed (LH) fermion mixings, such as the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix, are measurable, but not the right-handed (RH) ones. 
However, in the RS framework the RH fermion mixings become measurable through the effective couplings of the gauge bosons to the fermions induced from the Kaluza-Klein (KK) interactions. If the fermion mass matrices are symmetric, the LH and RH mixing matrices would be the same. Thus the most direct way of searching for the effects of these RH mixings would be through the induced RH fermion couplings in flavour changing processes that are either not present or very much suppressed in the SM. In this work we study how well the RS setting serves as a framework for flavour physics either with or without symmetries in the fermion mass matrices, and if the two scenarios can be distinguished experimentally. We concentrate on the quark (and especially the top) sector, and we study the issues involved in the RS1 model~\cite{RS1} with an $SU(2)_L \times SU(2)_R \times U(1)_X$ bulk symmetry, which we shall refer to as the minimal custodial RS (MCRS) model. The $U(1)_X$ is customarily identified with $U(1)_{B-L}$. The enlarged electroweak symmetry contains a custodial isospin symmetry which protects the SM $\rho$ parameter from receiving excessive corrections, and the model has been shown to be a complete one that can pass all electroweak precision tests (EWPT) at a scale of $\sim 3$ to 4~TeV~\cite{ADMS03}. The organization of the paper is as follows. In Sec.~\ref{Sec:RSFP} we quickly review the details of the MCRS model to fix our notations. In Sec.~\ref{Sec:RSMQ} we investigate which type of mass matrix ansatz is compatible with Yukawa couplings that are perturbative and not fine-tuned by matching the ansatz form to that in the MCRS model. Relevant matching formulae and EWPT limits on the controlling parameters are collected into the two Appendices. We also investigate possible patterns in the mass matrices by numerically scanning the EWPT allowed parameter space for those that can reproduce simultaneously the observed quark masses and the CKM mixing matrix. 
In Sec.~\ref{Sec:RHcurr} we study the effects that symmetric and non-symmetric quark mass matrices have on the flavour changing top decays, $t \ra c(u) Z$, which are expected to have the clearest signal at the LHC. We summarize our findings in Sec.~\ref{Sec:Conc}. \section{\label{Sec:RSFP}Review of the MCRS model} In this section, we briefly review the set-up of the MCRS model. We summarize relevant results on the KK reduction and the interactions of the bulk gauge fields and fermions, and establish the notation to be used below. \subsection{General set-up and gauge symmetry breaking} The MCRS model is formulated in a 5-dimensional (5D) background geometry based on a slice of $AdS_5$ space of size $\pi r_c$, where $r_c$ denotes the radius of the compactified fifth dimension. Two 3-branes are located at the boundaries of the $AdS_5$ slice, which are also the orbifold fixed points. They are taken to be $\phi=0$ (UV) and $\phi=\pi$ (IR) respectively. The metric is given by \begin{equation}\label{Eq:metric} ds^2 = G_{AB}\,dx^A dx^B = e^{-2\sigma(\phi)}\,\eta_{\mu\nu}dx^{\mu}dx^{\nu}-r_c^2 d\phi^2 \,, \qquad \sigma(\phi) = k r_c |\phi| \,, \end{equation} where $\eta_{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$, $k$ is the $AdS_5$ curvature, and $-\pi\leq\phi\leq\pi$. The model has $SU(2)_L \times SU(2)_R \times U(1)_{X}$ as its bulk gauge symmetry group. The fermions reside in the bulk, while the SM Higgs, which is now a bidoublet, is localized on the IR brane to avoid fine tuning. The 5D action of the model is given by~\cite{ADMS03} \begin{equation}\label{Eq:S5D} S=\int\!d^4x\!\int_{0}^{\pi}\!d\phi\,\sqrt{G}\left[ \CL_g +\CL_f + \CL_{UV}\,\delta (\phi) + \CL_{IR}\,\delta (\phi-\pi) \right] \,, \end{equation} where $\CL_g$ and $\CL_f$ are the bulk Lagrangians for the gauge fields and fermions respectively, and $\CL_{IR}$ contains both the Yukawa and Higgs interactions.
The gauge field Lagrangian is given by \begin{equation} \CL_g= -\frac{1}{4}\left( W_{AB}W^{AB} + \wtil{W}_{AB}\wtil{W}^{AB}+\wtil{B}_{AB}\wtil{B}^{AB} \right) \,, \end{equation} where $W$, $\wtil{W}$, $\wtil{B}$ are field strength tensors of $SU(2)_L$, $SU(2)_R$ and $U(1)_{X}$ respectively. On the IR brane, $SU(2)_L \times SU(2)_R$ is spontaneously broken down to $SU(2)_V$ when the SM Higgs acquires a vacuum expectation value (VEV). On the UV brane, first the custodial $SU(2)_R$ is broken down to $U(1)_R$ by orbifold boundary conditions; this involves assigning orbifold parities under $S^1/(Z_2 \times Z^{\prime}_2)$ to the $\mu$-components of the gauge fields: one assigns $(-+)$ for $\wtil{W}^{1,2}_\mu$, and $(++)$ for all other gauge fields, where the first (second) entry refers to the parity on the UV (IR) boundary. Then, $U(1)_R \times U(1)_{X}$ is further broken down to $U(1)_Y$ spontaneously (via a VEV), leaving just $SU(2)_L \times U(1)_Y$ as the unbroken symmetry group. \subsection{Bulk gauge fields} Let $A_M(x,\phi)$ be a massless 5D bulk gauge field, $M = 0,1,2,3,5$. Working in the unitary gauge where $A_5=0$, the KK decomposition of $A_\mu(x,\phi)$ is given by (see e.g.~\cite{Pom99,RSF}) \begin{equation}\label{Eq:gKKred} A_\mu(x,\phi)= \frac{1}{\sqrt{r_c\pi}}\sum_n A_\mu^{(n)}(x)\chi_n(\phi) \,, \end{equation} where $\chi_n$ are functions of the general form \begin{equation}\label{Eq:gWF} \chi_n = \frac{e^\sigma}{N_n} \big[J_1(z_n e^\sigma) + b_1(m_n)Y_1(z_n e^\sigma)\big] \,, \qquad z_n = \frac{m_n}{k} \,, \end{equation} that solve the eigenvalue equation \begin{equation}\label{Eq:gKKeq} \left(\frac{1}{r_c^2}\PD_\phi\,e^{-2\sigma}\PD_\phi-m_n^2\right)\chi_n = 0 \,, \end{equation} subject to the orthonormality condition \begin{equation} \frac{1}{\pi}\int^{\pi}_{0}\!d\phi\,\chi_n\chi_m = \delta_{mn} \,.
\end{equation} Depending on the boundary condition imposed on the gauge field, the coefficient function $b_1(m_n)$ is given by \begin{align} (++)\;\;\mathrm{B.C.}:\quad b_1(m_n) &= -\frac{J_0(z_n e^{\sigma(\pi)})}{Y_0(z_n e^{\sigma(\pi)})} = -\frac{J_0(z_n)}{Y_0(z_n)} \,, \\ (-+)\;\;\mathrm{B.C.}:\quad b_1(m_n) &= -\frac{J_0(z_n e^{\sigma(\pi)})}{Y_0(z_n e^{\sigma(\pi)})} = -\frac{J_1(z_n)}{Y_1(z_n)} \,, \end{align} which in turn determine the gauge KK eigenmasses, $m_n$. For fields with the $(++)$ boundary condition, the lowest mode is a massless state $A_\mu^{(0)}$ with a flat profile \begin{equation}\label{Eq:gflat} \chi_0 = 1 \,, \end{equation} while no zero mode exists for the $(-+)$ boundary condition. The SM gauge boson is identified with the zero-mode of the appropriate bulk gauge field after KK reduction. \subsection{Bulk fermions} The free 5D bulk fermion action can be written as (see e.g.~\cite{RSF,GN}) \begin{equation} S_f = \int\!d^4x\!\int^\pi_{0}\!d\phi\,\sqrt{G}\left\{ E^M_a\left[\frac{i}{2}\bar{\Psi}\gamma^a(\ovra{\PD_M}-\ovla{\PD_M})\Psi\right] +m\,\mathrm{sgn}(\phi)\bar{\Psi}\Psi\right\} \,, \end{equation} where $\gamma^a = (\gamma^\mu,i\gamma^5)$ are the 5D Dirac gamma matrices in flat space, $G$ is the metric given in Eq.~\eqref{Eq:metric}, $E^M_a$ the inverse vielbein, and $m = c\,k$ is the bulk Dirac mass parameter. There is no contribution from the spin connection because the metric is diagonal~\cite{GN}. The form of the mass term is dictated by the requirement of $Z_2$ orbifold symmetry~\cite{GN}.
The KK expansion of the fermion field takes the form \begin{equation}\label{Eq:PsiKK} \Psi_{L,R}(x,\phi) = \frac{e^{3\sigma/2}}{\sqrt{r_c\pi}} \sum_{n=0}^\infty\psi^{(n)}_{L,R}(x)f^n_{L,R}(\phi) \,, \end{equation} where the subscripts $L$ and $R$ label the chirality of the fields, and $f^n_{L,R}$ form two distinct sets of complete orthonormal functions, which are found to satisfy the equations \begin{equation} \left[\frac{1}{r_c}\PD_\phi-\left(\frac{1}{2}+c\right)k\right]f^n_R = m_n\,e^\sigma f^n_L \,, \qquad \left[-\frac{1}{r_c}\PD_\phi+\left(\frac{1}{2}-c\right)k\right]f^n_L = m_n\,e^\sigma f^n_R \,, \end{equation} with the orthonormality condition given by \begin{equation}\label{Eq:fortho} \frac{1}{\pi}\int^\pi_{0}\!d\phi\,f^{n\star}_{L,R}(\phi)f^m_{L,R}(\phi) = \delta_{mn} \,. \end{equation} Of particular interest are the zero modes, which are to be identified as SM fermions: \begin{equation} f^0_{L,R}(\phi,c_{L,R}) = \sqrt{\frac{k r_c\pi(1 \mp 2c_{L,R})}{e^{k r_c\pi(1 \mp 2c_{L,R})}-1}} e^{(1/2 \mp c_{L,R})k r_c\phi} \,, \end{equation} where the upper (lower) sign applies to the LH (RH) label. Depending on the $Z_2$ parity of the fermion, one of the chiralities is projected out. It can be seen that the LH zero mode is localized towards the UV (IR) brane if $c_L > 1/2$ ($c_L < 1/2$), while the RH zero mode is localized towards the UV (IR) brane when $c_R < -1/2$ ($c_R > -1/2$). The higher fermion KK modes have the general form \begin{equation}\label{Eq:fWF} f^n_{L,R} = \frac{e^{\sigma}}{N_n}B_{\alpha}(z_n e^\sigma) \,, \qquad B_{\alpha}(z_n e^\sigma) = J_{\alpha}(z_n e^\sigma) + b_{\alpha}(m_n)Y_{\alpha}(z_n e^\sigma) \,, \end{equation} where $\alpha = |c \pm 1/2|$, with the LH (RH) mode taking the upper (lower) sign.
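The exponential sensitivity of the zero-mode profile to $c$ is the origin of the flavour hierarchy in this framework. A minimal numerical sketch of the IR-brane value of the profile (using $k r_c = 11.7$, the value adopted below; the function name is ours):

```python
import math

k_rc = 11.7                      # k r_c, the value adopted in this paper
A = k_rc * math.pi               # k r_c pi

def F(c):
    """IR-brane value f^0_L(pi, c) of the LH zero mode; pass -c for the RH mode."""
    x = A * (1.0 - 2.0 * c)      # k r_c pi (1 - 2c)
    return math.sqrt(x / math.expm1(x)) * math.exp(x / 2.0)

# IR-localized (c < 1/2) modes have O(1) overlap with the IR brane;
# UV-localized (c > 1/2) modes have an exponentially suppressed one.
F_ir, F_uv = F(0.3), F(0.7)
```

An $O(0.1)$ spread in $c$ thus translates into several orders of magnitude in the brane overlap, with no hierarchy in the input parameters.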
Depending on the type of the boundary condition a fermion field has, the coefficient function $b_{\alpha}(m_n)$ takes the form~\cite{ADMS03} \begin{align} \label{Eq:bapp} (++)\;\;\mathrm{B.C.}:\quad b_{\alpha}(m_n) &= -\frac{J_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} {Y_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} = -\frac{J_{\alpha \mp 1}(z_n)}{Y_{\alpha \mp 1}(z_n)} \,, \\ \label{Eq:bamp} (-+)\;\;\mathrm{B.C.}:\quad b_{\alpha}(m_n) &= -\frac{J_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} {Y_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} = -\frac{J_{\alpha}(z_n)}{Y_{\alpha}(z_n)} \,, \end{align} and the normalization factor can be written as~\cite{ADMS03} \begin{align} (++)\;\;\mathrm{B.C.}:\quad N_n^2 &= \frac{e^{2\sigma(\phi)}}{2k r_c\pi} B^2_{\alpha}(z_n e^{\sigma(\phi)})\Big|^{\phi=\pi}_{\phi=0} \,, \\ (-+)\;\;\mathrm{B.C.}:\quad N_n^2 &= \frac{1}{2k r_c\pi}\big[ e^{2\sigma(\pi)}B^2_{\alpha}(z_n e^{\sigma(\pi)}) -B^2_{\alpha \mp 1}(z_n)\big] \,. \end{align} The upper sign in the order of the Bessel functions above applies to the LH (RH) mode when $c_L > -1/2$ ($c_R < 1/2$), while the lower sign applies to the LH (RH) mode when $c_L < -1/2$ ($c_R > 1/2$). The spectrum of fermion KK masses is found from the coefficient function relations given by Eqs.~\eqref{Eq:bapp} and~\eqref{Eq:bamp}. Now there is an additional $SU(2)_R$ gauge symmetry over the SM in the bulk, and the fermions have to be embedded into its representations. Below we choose the simplest way of doing this, viz. the LH fermions are embedded as $SU(2)_R$ singlets, while the RH fermions are doublets~\cite{ADMS03}. Note that since the $SU(2)_R$ is broken on the UV brane by the orbifold boundary condition, one component of the doublet under it must be even under the $Z_2$ parity, and the other odd. This forces a doubling of the RH doublets: the upper (up-type) component of one doublet is even, while the lower (down-type) component of the other is even.
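The KK masses following from these quantization conditions are easily found numerically by equating the UV and IR expressions for the coefficient function. A sketch (using scipy root-finding; note that a fermion with $c = 1/2$ has Bessel order $\alpha \mp 1 = 0$ and so obeys the same $(++)$ condition as a bulk gauge boson):

```python
import math
from scipy.optimize import brentq
from scipy.special import jv, yv

k_rc = 11.7
warp = math.exp(-math.pi * k_rc)        # e^{-k r_c pi} = k_tilde / k

def quantization_pp(x, nu):
    """(++) condition in x = m_n / k_tilde: equate the UV and IR forms of b."""
    z = x * warp                         # z_n = m_n / k
    return jv(nu, x) * yv(nu, z) - yv(nu, x) * jv(nu, z)

# First KK mass for Bessel order 0 (a c = 1/2 fermion, or a (++) gauge boson),
# in units of the warped-down scale k_tilde:
x1 = brentq(lambda x: quantization_pp(x, 0.0), 2.0, 3.0)
```

The root lands at $x_1 \approx 2.45$, i.e. $m_1 \approx 2.45\,\tilde{k} \approx 4$~TeV for the $\tilde{k} = 1.65$~TeV used below, the familiar RS result.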
\subsection{Fermion interactions} In 5D, the interaction between fermions and a bulk gauge boson is given by \begin{equation} S_{f\bar{f}A} = g_5\int\!d^4x\,d\phi\,\sqrt{G}E^M_a\bar{\Psi}\gamma^a A_M\Psi +\mathrm{h.\,c.} \,, \end{equation} where $g_5$ is the 5D gauge coupling constant. After KK reduction, couplings of the KK modes in the 4D effective theory arise from the overlap of the wave functions in the bulk. In particular, the coupling of the {\it m}th and {\it n}th fermion KK modes to the {\it q}th gauge KK mode is given by \begin{equation} g^{m\,n\,q}_{f\bar{f}A} = \frac{g_4}{\pi}\int^\pi_{0}\!d\phi\,f^m_{L,R}f^n_{L,R}\chi_q \,, \qquad g_4 = \frac{g_5}{\sqrt{r_c\pi}} \,, \end{equation} where $g_4 \equiv g_{SM}$ is the 4D SM gauge coupling constant. Note that since the gauge zero-mode has a flat profile (Eq.~\eqref{Eq:gflat}), by the orthonormality condition of the fermion wave functions, Eq.~\eqref{Eq:fortho}, only fermions of the same KK level couple to the gauge zero-mode, and the 4D coupling is simply given by $g^{m\,m\,0}_{f\bar{f}A} = g_4$. With the Higgs field $\Phi$ localized on the IR brane, the Yukawa interactions are contained entirely in $\CL_{IR}$ of the 5D action~\eqref{Eq:S5D}. The relevant action on the IR brane is given by \begin{equation} S_\mathrm{Yuk} = \int\!d^4x\,d\phi\,\sqrt{G}\,\delta(\phi-\pi) \frac{\lambda_{5,ij}}{k r_c}\,\bar{\Psi}_i(x,\phi)\Psi_j(x,\phi)\Phi(x) +\mathrm{h.\,c.} \,, \end{equation} where $\lambda_{5,ij}$ are the dimensionless 5D Yukawa couplings, and $i,j$ the family indices.
Rescaling the Higgs field to $H(x)= e^{-k r_c\pi}\,\Phi(x)$ so that it is canonically normalized, the effective 4D Yukawa interaction obtained after spontaneous symmetry breaking is given by \begin{equation} S_\mathrm{Yuk} = \int\!d^4x\,v_W\frac{\lambda_{5,ij}}{k r_c\pi} \sum_{m,n}\bar{\psi}_{iL}^{(m)}(x)\psi_{jR}^{(n)}(x) f^m_L(\pi,c^L_{i})f^n_R(\pi,c^R_{j}) + \mathrm{h.\,c.} \,, \end{equation} where $\langle H \rangle = v_W = 174$~GeV is the VEV acquired by the Higgs field. The zero modes give rise to the SM mass terms, and the resulting mass matrix reads \begin{equation}\label{Eq:RSM} (M^{RS}_f)_{ij} = v_W\frac{\lambda^f_{5,ij}}{k r_c\pi} f^0_{L}(\pi,c^{L}_{f_i})f^0_{R}(\pi,c^{R}_{f_j}) \equiv v_W\frac{\lambda^f_{5,ij}}{k r_c\pi}F_L(c^{L}_{f_i})F_R(c^{R}_{f_j}) \,, \quad f = u,\,d \,, \end{equation} where the label $f$ denotes up-type or down-type quark species. Note that the Yukawa couplings are in general complex, and so take the form $\lambda^f_{5,ij} \equiv \rho^f_{ij}e^{i\phi_{ij}}$, with $\rho^f_{ij},\,\phi_{ij}$ the magnitude and the phase respectively. \section{\label{Sec:RSMQ}Structure of the quark mass matrices} In this section, we investigate the possible quark flavour structure in the RS framework. One immediate requirement on the candidate structures is that the experimentally observed quark mass spectrum and mixing pattern are reproduced. Another would be that the 5D Yukawa couplings are all of the same order, in accordance with the philosophy of the RS framework that there is no intrinsic hierarchy. We also require that constraints from EWPT be satisfied. To arrive at the candidate structures, we follow two strategies. One is to start with a known SM quark mass matrix ansatz which reproduces the observed quark mass spectrum and mixing pattern. The ansatz form is then matched onto the RS mass matrix to see if the above requirements are satisfied.
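Before turning to specific ans\"atze, it is easy to see numerically how Eq.~\eqref{Eq:RSM} turns anarchic $O(1)$ Yukawa couplings into hierarchical masses. A sketch with illustrative localization parameters (UV-localized light generations, IR-localized top; these $c$ values are not a fit from this paper):

```python
import numpy as np

k_rc = 11.7
A = np.pi * k_rc
v_W = 174.0                                  # GeV

def F(c):
    # IR-brane value of the zero-mode profile (LH convention; pass -c for RH)
    x = A * (1.0 - 2.0 * c)
    return np.sqrt(x / np.expm1(x)) * np.exp(x / 2.0)

# Illustrative localization parameters, chosen by hand for this sketch
c_Q = [0.63, 0.56, 0.30]
c_U = [-0.66, -0.54, 0.10]

rng = np.random.default_rng(1)
lam = rng.uniform(1.0, 3.0, (3, 3)) * np.exp(2j * np.pi * rng.random((3, 3)))
M_u = v_W / A * lam * np.outer([F(c) for c in c_Q], [F(-c) for c in c_U])
masses = np.linalg.svd(M_u, compute_uv=False)   # descending singular values
```

The largest singular value sits at the weak scale while the smallest is orders of magnitude below it, purely from the wave function factors.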
The other strategy is to generate RS mass matrices at random and then pick out those that satisfy the requirements above.~\footnote{This has been tried before in Ref.~\cite{H03}, but it was done for the case with $m_{KK} > 10$~TeV where there is a little hierarchy.} To solve the hierarchy problem, we take $k r_c = 11.7$ and the warped down scale to be $\tilde{k} = k e^{-k r_c\pi} = 1.65$~TeV. Since new physics first arises at the TeV scale in the RS framework, it is also where experimental data are matched to the RS model below. We will assume that the CKM matrix evolves slowly between $\mu = M_Z$ and $\mu = 1$~TeV so that the PDG values can be adopted, and we will use the running quark mass central values at $\mu = 1$~TeV from Ref.~\cite{XZZ07}. \subsection{\label{Sec:MMA}Structure from mass matrix ansatz} In trying to understand the pattern of quark flavour mixing, many ans\"atze for the SM quark mass matrices have been proposed over the years. There are two common types of mass matrix ansatz consistent with the current CKM data. One type is the Hermitian ansatz first proposed by Fritzsch some time ago~\cite{Fansatz}, which has been recently updated to better accommodate $|V_{cb}|$~\cite{FX03}. The other type is the symmetric ansatz proposed by Koide \textit{et al.}~\cite{KNMKF02}, which was inspired by the nearly bimaximal mixing pattern in the lepton sector.~\footnote{In the SM, because of the freedom in choosing the RH flavour rotation, quark mass matrices can always be made Hermitian. But this need not be the case in the RS framework as we show below.} Using these ans\"atze as templates, we find that only the Koide-type ansatz admits hierarchy-free 5D Yukawa couplings; this property is demonstrated below. That the Fritzsch-type ansatz generically leads to hierarchical Yukawa couplings is shown in Appendix~\ref{app:HermM}.
The admissible ansatz we found takes the form \begin{equation}\label{Eq:MNM} M_f = P_f^\hc \hat{M}_f P_f^\hc \,, \quad f = u,\,d, \end{equation} where $P_f = \mrm{diag}\{e^{i\delta^f_1},\,e^{i\delta^f_2},\,e^{i\delta^f_3}\}$ is a diagonal pure phase matrix, and \begin{equation} \hat{M}_f = \begin{pmatrix} \xi_f & C_f & C_f \\ C_f & A_f & B_f \\ C_f & B_f & A_f \end{pmatrix} \,, \end{equation} with all entries real and $\xi_f$ much less than all other entries. When $\xi_f = 0$, the ansatz of Ref.~\cite{KNMKF02} is recovered. The real symmetric matrix $\hat{M}_f$ is diagonalized by the orthogonal matrix \begin{equation}\label{Eq:MNOQ} O_f^\mrm{T} \hat{M}_f O_f = \begin{pmatrix} \lambda^f_1 & 0 & 0 \\ 0 & \lambda^f_2 & 0 \\ 0 & 0 & \lambda^f_3 \end{pmatrix} \,, \quad O_f = \begin{pmatrix} c_f & 0 & s_f \\ -\frac{s_f}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & \frac{c_f}{\sqrt{2}} \\ -\frac{s_f}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{c_f}{\sqrt{2}} \end{pmatrix} \,, \end{equation} where the eigenvalues are given by \begin{align} \lambda_1^f &= \frac{1}{2}\left[ A_f+B_f+\xi_f-\sqrt{8C_f^2+(A_f+B_f-\xi_f)^2}\right] \,, \notag \\ \lambda_2^f &= A_f-B_f \,, \notag \\ \lambda_3^f &= \frac{1}{2}\left[ A_f+B_f+\xi_f+\sqrt{8C_f^2+(A_f+B_f-\xi_f)^2}\right] \,, \end{align} and the mixing angles are given by \begin{equation} c_f = \sqrt{\frac{\lambda^f_3-\xi_f}{\lambda^f_3-\lambda^f_1}} \,, \quad s_f = \sqrt{\frac{\xi_f-\lambda^f_1}{\lambda^f_3-\lambda^f_1}} \,. \end{equation} Note that the components of $\hat{M}_f$ can be expressed as \begin{align}\label{Eq:ABC2m} A_f &= \frac{1}{2}(\lambda_3^f+\lambda_2^f+\lambda_1^f-\xi_f) \,, \notag \\ B_f &= \frac{1}{2}(\lambda_3^f-\lambda_2^f+\lambda_1^f-\xi_f) \,, \notag \\ C_f &= \frac{1}{\sqrt{2}}\sqrt{(\lambda_3^f-\xi_f)(\xi_f-\lambda_1^f)} \,. \end{align} To reproduce the observed mass spectrum $m_1^f < m_2^f < m_3^f$, the eigenvalues $\lambda_i^f$, $i = 1,\,2,\,3$, are assigned to be the appropriate quark masses.
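The diagonalization above is straightforward to check numerically. A small consistency sketch (with illustrative, unfitted entries) verifying the closed-form eigenvalues and that they invert back to $A_f$, $B_f$, $C_f$:

```python
import numpy as np

# Illustrative values with xi much smaller than A, B, C (not a fit)
A_f, B_f, C_f, xi = 2.0, 1.0, 0.3, 0.01
M_hat = np.array([[xi,  C_f, C_f],
                  [C_f, A_f, B_f],
                  [C_f, B_f, A_f]])

disc = np.sqrt(8.0 * C_f**2 + (A_f + B_f - xi)**2)
lam1 = 0.5 * (A_f + B_f + xi - disc)
lam2 = A_f - B_f
lam3 = 0.5 * (A_f + B_f + xi + disc)

# Numerical eigenvalues agree with the closed-form expressions ...
eig = np.sort(np.linalg.eigvalsh(M_hat))
# ... and the relations invert to recover A_f, B_f, C_f from the lambda's
A_rec = 0.5 * (lam3 + lam2 + lam1 - xi)
B_rec = 0.5 * (lam3 - lam2 + lam1 - xi)
C_rec = np.sqrt(0.5 * (lam3 - xi) * (xi - lam1))
```

The same identities underlie the matching to the RS mass matrix performed below.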
For the Koide ansatz (the $\xi_f = 0$ case), it was pointed out in Ref.~\cite{MN04} that different assignments are needed for the up and down sectors to fit $|V_{ub}|$ better. Since the ansatz, Eq.~\eqref{Eq:MNM}, is really a perturbed Koide ansatz, we follow the same assignments here: \begin{align}\label{Eq:m2ABC} \lambda^u_1 &= -m^u_1 \,, & \lambda^u_2 &= m^u_2 \,, & \lambda^u_3 &= m^u_3 \,, \notag \\ \lambda^d_1 &= -m^d_1 \,, & \lambda^d_2 &= m^d_3 \,, & \lambda^d_3 &= m^d_2 \,. \end{align} Now since $O_d^\mrm{T}\hat{M}_d\,O_d = \mrm{diag}\{-m^d_1,\,m^d_3,\,m^d_2\}$ for the down-type quarks, to put the eigenvalues into hierarchical order, the diagonalization matrix becomes $O'_d = O_d\,T_{23}$, where \begin{equation} T_{23} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \,. \end{equation} The quark mixing matrix is then given by \begin{equation} V_\mrm{mix} = O_u^\mrm{T}P_u P_d^\hc O'_d = \begin{pmatrix} c_u c_d+\kappa s_u s_d & c_u s_d-\kappa s_u c_d & -\sigma s_u \\ -\sigma s_d & \sigma c_d & \kappa \\ s_u c_d-\kappa c_u s_d & s_u s_d+\kappa c_u c_d & \sigma c_u \end{pmatrix} \,, \end{equation} where \begin{equation} \kappa = \frac{1}{2}(e^{i\delta_3}+e^{i\delta_2}) \,, \quad \sigma = \frac{1}{2}(e^{i\delta_3}-e^{i\delta_2}) \,, \qquad \delta_i = \delta^u_i-\delta^d_i \,, \quad i = 1,\,2,\,3 \,. \end{equation} Without loss of generality, $\delta_1$ is taken to be zero. The matrix $V_\mrm{mix}$ depends on four free parameters, $\delta_{2,3}$ and $\xi_{u,d}$. A good fit to the CKM matrix is found by demanding the following set of conditions: \begin{equation}\label{Eq:CKMcond} |\kappa| = |V_{cb}| = 0.04160 \,, \quad |\sigma|s_u = |V_{ub}| = 0.00401 \,, \quad |\sigma|s_d = |V_{cd}| = 0.22725 \,, \end{equation} and $\delta_{CP} = -(\delta_3+\delta_2)/2 = 59^\circ$.
These imply \begin{equation}\label{Eq:CKMfit} \delta_2 = -2.55893 \,, \quad \delta_3 = 0.49944 \,, \quad \xi_u = 1.36226 \times 10^{-3} \,,\quad \xi_d = 6.50570 \times 10^{-5} \,, \end{equation} which in turn lead to a Jarlskog invariant of $J = 3.16415 \times 10^{-5}$ and \begin{equation} |V_\mrm{mix}| = \begin{pmatrix} 0.97380 & 0.22736 & 0.00401 \\ 0.22725 & 0.97294 & 0.04160 \\ 0.00816 & 0.04099 & 0.99913 \end{pmatrix} \,, \end{equation} both of which are in very good agreement with the globally fitted data. With $\delta_{2,3}$ and $\xi_{u,d}$ determined, so are $\hat{M}_{u,d}$. From Eq.~\eqref{Eq:ABC2m} we have, in units of GeV, \begin{align}\label{Eq:ABCnum} A_u &= 77.32226 \,, & B_u &= 76.77526 \,, & C_u &= 0.43733 \,, \notag \\ A_d &= 1.26269 \,, & B_d &= -1.21731 \,, & C_d &= 7.91684 \times 10^{-3} \,. \end{align} Parameters of the RS mass matrix~\eqref{Eq:RSM} can now be solved for by matching the RS mass matrix onto the ansatz~\eqref{Eq:MNM}. Starting with $M^{RS}_u$, there are a total of 24 parameters to be determined: six fermion wave function values, $F_L(c_{Q_i})$ and $F_R(c_{U_i})$, nine Yukawa magnitudes, $\rho^u_{ij}$, and nine Yukawa phases, $\phi^u_{ij}$, where $i,j = 1,\,2,\,3$.~\footnote{We will denote using subscripts $Q$, $U$, and $D$ respectively, for the left-handed quark doublet, and the right-handed up- and down-type singlets of $SU(2)_L$.} Matching $M^{RS}_u$ to $M_u$ results in nine conditions for both magnitudes and phases. Thus all the up-type Yukawa phases are determined by the three phases $\delta^u_i$, while six magnitudes are left as free independent parameters. These we choose to be $F_L(c_{Q_3})$ and $F_R(c_{U_3})$, which are constrained by EWPT, and $\rho^u_{11}$, $\rho^u_{21}$, $\rho^u_{31}$, $\rho^u_{32}$. Next we match $M^{RS}_d$ to $M_d$. Since $F_L(c_{Q_i})$ have already been determined, there are only 21 parameters left in $M^{RS}_d$: $F_R(c_{D_i})$, $\rho^d_{ij}$, and $\phi^d_{ij}$.
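The fit can be checked end to end by constructing $V_\mrm{mix}$ directly from the conditions of Eq.~\eqref{Eq:CKMcond}. A numerical sketch (the product $O_u^\mrm{T}P_uP_d^\hc O'_d$ is carried out explicitly, so the resulting matrix is unitary by construction):

```python
import numpy as np

# Inputs from Eq. (CKMcond): |kappa| = |V_cb|, |sigma| s_u = |V_ub|,
# |sigma| s_d = |V_cd|, and delta_CP = -(delta_3 + delta_2)/2 = 59 deg
Vcb, Vub, Vcd = 0.04160, 0.00401, 0.22725
dsum = -2.0 * np.radians(59.0)           # delta_3 + delta_2
ddiff = 2.0 * np.arccos(Vcb)             # from |kappa| = |cos((d3 - d2)/2)|
d2, d3 = 0.5 * (dsum - ddiff), 0.5 * (dsum + ddiff)

kappa = 0.5 * (np.exp(1j * d3) + np.exp(1j * d2))
sigma = 0.5 * (np.exp(1j * d3) - np.exp(1j * d2))
su, sd = Vub / abs(sigma), Vcd / abs(sigma)
cu, cd = np.sqrt(1.0 - su**2), np.sqrt(1.0 - sd**2)

V = np.array([
    [cu*cd + kappa*su*sd, cu*sd - kappa*su*cd, -sigma*su],
    [-sigma*sd,           sigma*cd,            kappa    ],
    [su*cd - kappa*cu*sd, su*sd + kappa*cu*cd, sigma*cu ],
])
J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
```

The entry magnitudes reproduce the quoted $|V_\mrm{mix}|$, and the Jarlskog invariant comes out at $J \approx 3.16 \times 10^{-5}$.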
Again all the down-type Yukawa phases are determined by the three phases, $\delta^d_i$, leaving three free magnitudes which we choose to be $\rho^d_{31}$, $\rho^d_{32}$, and $\rho^d_{33}$. We collect all relevant results from the matching processes into Appendix~\ref{app:SymmM}. To see that the ansatz~\eqref{Eq:MNM} does not lead to a hierarchy in the Yukawa couplings, note from Eq.~\eqref{Eq:ABC2m} we have \begin{equation} A_f \sim |B_f| \sim \frac{m_3^f}{2} \,, \quad C_u\sim\frac{\sqrt{m_3^u\,m_1^u}}{2} \,, \quad C_d\sim\frac{\sqrt{m_2^d\,m_1^d}}{2} \,. \end{equation} Given this and Eq.~\eqref{Eq:CKMfit}, we see from Eqs.~\eqref{Eq:yQu} and~\eqref{Eq:yQd} that as long as \begin{equation}\label{Eq:freerho} \rho^d_{31}\sim\rho^d_{32}\sim\rho^d_{33}\sim \rho^u_{11}\sim\rho^u_{21}\sim\rho^u_{31}\sim\rho^u_{32}\sim\rho^u_{33} \,, \end{equation} all Yukawa couplings would be of the same order in magnitude. It is amusing to note that if we begin by imposing the condition that the 5D Yukawa couplings are hierarchy-free instead of first fitting the CKM data, we find \begin{equation} \xi_u \sim m_1^u \sim 10^{-3} \,, \quad \xi_d \sim \sqrt{m_2^d\,m_1^d}\sqrt{\frac{m_1^u}{m_3^u}} \sim 3 \times 10^{-5} \,, \end{equation} which give the correct order of magnitude for $\xi_{u,d}$ necessary for $V_\mrm{mix}$ to fit the experimental CKM values. From relations~\eqref{Eq:yQu}, \eqref{Eq:FLQFRU}, and~\eqref{Eq:FRD}, for mass matrices given by the ansatz~\eqref{Eq:MNM}, all localization parameters can be determined from just that of the third generation $SU(2)_L$ doublet, $c_{Q_3}$, and the Yukawa coupling magnitudes listed in Eq.~\eqref{Eq:freerho}. To satisfy the bounds from flavour-changing neutral currents (FCNCs), LH light quarks from the first two generations should be localized towards the UV brane. As discussed in Appendix~\ref{app:SymmM}, for generic choices of Yukawa couplings this is so for the first generation LH quarks, but not for the second generation.
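The order-of-magnitude estimates for $\xi_{u,d}$ above are simple arithmetic. A quick check with approximate running masses at $\mu = 1$~TeV (rounded central values of the kind given in Ref.~\cite{XZZ07}; the numbers here are only illustrative):

```python
import math

# Approximate running quark masses at mu = 1 TeV, in GeV (illustrative)
m_u, m_d, m_s, m_t = 1.1e-3, 2.5e-3, 0.047, 150.0

xi_u_est = m_u                                            # xi_u ~ m_1^u
xi_d_est = math.sqrt(m_s * m_d) * math.sqrt(m_u / m_t)    # xi_d estimate
```

Both land at the order of the fitted values $\xi_u \simeq 1.4 \times 10^{-3}$ and $\xi_d \simeq 6.5 \times 10^{-5}$ of Eq.~\eqref{Eq:CKMfit}.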
In order to have $c_{Q_2} > 0.5$ while still satisfying constraints from Eqs.~\eqref{Eq:constr2} and~\eqref{Eq:constr3} and the EWPT constraint $c_{U_3} < 0.2$, we choose \begin{equation}\label{Eq:UVchoice} \frac{\rho^u_{31}}{\rho^u_{21}} = 0.2615 \,, \quad \rho^u_{11} = \rho^u_{31} = 0.7 \,, \quad \rho^u_{33} = 0.85 \,, \quad \rho^u_{32} = \rho^d_{31} = \rho^d_{32} = \rho^d_{33} = 1 \,. \end{equation} We also have to shorten the EWPT allowed range of $c_{Q_3}$ to $(0.3,0.4)$ so that $c_{Q_2} > 0.5$ is always satisfied. Note that relation~\eqref{Eq:FLQFRU} constrains $c_{U_2}$ to be greater than $-0.5$ if the perturbativity constraint, $\lambda_5 < 4$, is to be met. The localization parameters increase monotonically as $c_{Q_3}$ increases. Except for $c_{U_{2,3}}$, the variation of the localization parameters is small. We list below their ranges of variation as $c_{Q_3}$ varies from 0.3 to 0.4 given the choice of the Yukawa couplings~\eqref{Eq:UVchoice}: \begin{gather} 0.65 < c_{Q_1} < 0.66 \,, \qquad 0.50 < c_{Q_2} < 0.52 \,, \notag \\ -0.62 < c_{U_1} < -0.61 \,, \qquad -0.26 < c_{U_2} < -0.01 \,, \qquad -0.16 < c_{U_3} < 0.18 \,, \notag \\ -0.75 < c_{D_1} < -0.74 \,, \qquad -0.60 < c_{D_{2,3}} < -0.59 \,. \end{gather} \subsection{\label{Sec:Rand}Structure from numerical search} The RS mass matrix given by Eq.~\eqref{Eq:RSM} has a productlike form: \begin{equation} M^{RS}\sim \begin{pmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{pmatrix} \,, \qquad a_i = F_L(c^L_i) \,, \quad b_i = F_R(c^R_i) \,, \end{equation} and it can be brought into diagonal form by a biunitary transformation \begin{equation} (U_L^f)^\hc M^{RS}_f\,U_R^f = \begin{pmatrix} \lambda^f_1 & 0 & 0 \\ 0 & \lambda^f_2 & 0 \\ 0 & 0 & \lambda^f_3 \end{pmatrix} \,, \quad f = u,\,d \,.
\end{equation} Suppose there were just one universal 5D Yukawa coupling, say $\lambda_5 = 1$; then the RS mass matrix $M^{RS}_f$ would be singular with two zero eigenvalues, and both the LH and RH quark mixing matrices would be the identity matrix, i.e. $V^{L,R}_{mix} = (U^u_{L,R})^\hc U^d_{L,R} = \mathbb{1}_{3 \times 3}$. Thus, in order to obtain realistic quark masses and CKM mixing angles ($V^L_{mix} \equiv V_{CKM}$), one cannot assume one universal Yukawa coupling. Rather, for each configuration of localization parameters, the magnitudes and phases of the 5D Yukawa coupling constants, $\rho_{ij}$ and $\phi_{ij}$, will be randomly chosen from the intervals $[1.0,3.0]$ and $[0,2\pi]$ respectively, and we take a sample size of $10^5$. The numerical search is done with $0.5 < c_{Q_{1,2}} < 1$ and $-1 < c_{U_{1,2}},\,c_{D_{1,2,3}} < -0.5$ so that the first two generation quarks, as well as the third generation RH quarks of the $D_3$ doublet are localized towards the UV brane. For the third generation, $0.25 < c_{Q_3} < 0.4$ and $-0.5 < c_{U_3}< 0.2$ are required so the EWPT constraints are satisfied (see Appendix~\ref{app:SymmM}). We averaged the quark masses and CKM mixing angles over the entire sample for each configuration of localization parameters, and these choices yielded averaged values that are within one statistical deviation of the experimental values at $\mu = 1$~TeV as given in Ref.~\cite{XZZ07}. Below we give three representative configurations from the admissible configurations found after an extensive search. \begin{itemize} \item Configuration~I: \begin{align} c_Q &= \{0.634,0.556,0.256\} \,, \notag \\ c_U &= \{-0.664,-0.536,0.185\} \,, \notag \\ c_D &= \{-0.641,-0.572,-0.616\} \,.
\end{align} \end{itemize} In units of GeV, the mass matrices averaged over the whole sample are given by \begin{equation} \langle|M_u|\rangle = \begin{pmatrix} 8.97\times 10^{-4} & 0.049 & 0.767 \\ 0.010 & 0.554 & 8.69 \\ 0.166 & 9.06 & 142.19 \end{pmatrix} \,, \quad \langle|M_d|\rangle = \begin{pmatrix} 0.0019 & 0.017 & 0.0044 \\ 0.022 & 0.196 & 0.050 \\ 0.352 & 3.209 & 0.813 \end{pmatrix} \,, \end{equation} which have eigenvalues \begin{align} m_t &= 109(52) \,, & m_c &= 0.56(59) \,, & m_u &= 0.0011(12) \,, \notag \\ m_b &= 2.59(111) \,, & m_s &= 0.048(32) \,, & m_d &= 0.0017(12) \,. \end{align} The resulting mixing matrices are given by \begin{align} |V^{L}_{us}| &= 0.16(14) \,, & |V^{L}_{ub}| &= 0.009(11) \,, & |V^{L}_{cb}| &= 0.079(74) \,, \notag \\ |V^{R}_{us}| &= 0.42(24) \,, & |V^{R}_{ub}| &= 0.12(10) \,, & |V^{R}_{cb}| &= 0.89(13) \,, \end{align} which give rise to an averaged Jarlskog invariant consistent with zero with a standard error of $1.3 \times 10^{-4}$. \begin{itemize} \item Configuration~II: \begin{align} c_Q &= \{0.629,0.546,0.285\} \,, \notag \\ c_U &= \{-0.662,-0.550,0.080\} \,, \notag \\ c_D &= \{-0.580,-0.629,-0.627\} \,. \end{align} \end{itemize} In units of GeV, the mass matrices averaged over the entire sample are given by \begin{equation} \langle|M_u|\rangle = \begin{pmatrix} 0.0011 & 0.039 & 0.834 \\ 0.014 & 0.492 & 10.55 \\ 0.16 & 5.726 & 122.87 \end{pmatrix} \,, \quad \langle|M_d|\rangle = \begin{pmatrix} 0.017 & 0.0034 & 0.0036 \\ 0.209 & 0.043 & 0.046 \\ 2.43 & 0.506 & 0.539 \end{pmatrix} \,, \end{equation} which have eigenvalues \begin{align} m_t &= 95(45) \,, & m_c &= 0.49(50) \,, & m_u &= 0.0014(16) \,, \notag \\ m_b &= 2.01(83) \,, & m_s &= 0.057(35) \,, & m_d &= 0.0022(15) \,.
\end{align} The resulting mixing matrices are given by \begin{align} |V^{L}_{us}| &= 0.14(12) \,, & |V^{L}_{ub}| &= 0.011(13) \,, & |V^{L}_{cb}| &= 0.11(10) \,, \notag \\ |V^{R}_{us}| &= 0.30(20) \,, & |V^{R}_{ub}| &= 0.90(12) \,, & |V^{R}_{cb}| &= 0.23(15) \,, \end{align} which give rise to an averaged Jarlskog invariant consistent with zero with a standard error of $2.3 \times 10^{-4}$. \begin{itemize} \item Configuration~III: \begin{align} c_Q &= \{0.627,0.571,0.272\} \,, \notag \\ c_U &= \{-0.518,-0.664,0.180\} \,, \notag \\ c_D &= \{-0.576,-0.610,-0.638\} \,. \end{align} \end{itemize} In units of GeV, the mass matrices averaged over the entire sample are given by \begin{equation} \langle|M_u|\rangle = \begin{pmatrix} 0.092 & 0.0010 & 0.940 \\ 0.554 & 0.0065 & 5.66 \\ 13.4 & 0.158 & 136.9 \end{pmatrix} \,, \quad \langle|M_d|\rangle = \begin{pmatrix} 0.019 & 0.0066 & 0.0026 \\ 0.114 & 0.039 & 0.016 \\ 2.774 & 0.955 & 0.376 \end{pmatrix} \,, \end{equation} which have eigenvalues \begin{align} m_t &= 106(50) \,, & m_c &= 0.56(55) \,, & m_u &= 0.0013(12) \,, \notag \\ m_b &= 2.32(94) \,, & m_s &= 0.036(21) \,, & m_d &= 0.0023(16) \,. \end{align} The resulting mixing matrices are given by \begin{align} |V^{L}_{us}| &= 0.27(19) \,, & |V^{L}_{ub}| &= 0.010(10) \,, & |V^{L}_{cb}| &= 0.048(44) \,, \notag \\ |V^{R}_{us}| &= 0.77(19) \,, & |V^{R}_{ub}| &= 0.36(21) \,, & |V^{R}_{cb}| &= 0.85(15) \,, \end{align} which give rise to an averaged Jarlskog invariant consistent with zero with a standard error of $1.9 \times 10^{-4}$. In summary, from the numerical study we found that in the RS framework, there is neither a preferred form for the mass matrix nor a universal RH mixing pattern. Note that the RH mixing matrix is in general quite different from its LH counterpart, viz. the CKM matrix. \section{\label{Sec:RHcurr}Flavour violating top quark decays} In this section we study the consequences that the different forms of quark mass matrices have on FCNC processes.
We focus below on the decays $t \ra c\,(u)\,Z \ra c\,(u)\,l\bar{l}$, where $l=e,\mu,\tau,\nu$. Decay modes into a real $Z$ plus $c\,(u)$-jets are expected to have a much higher rate than those involving a photon or a light Higgs, which proceed through loop effects. Moreover, much cleaner signatures at the LHC can be provided by leptonic $Z$-decays. \subsection{\label{Sec:treeFC}Tree-level flavour violations in MCRS} Tree-level FCNCs are generic in extra-dimensional models, for both a flat background geometry~\cite{FCNC1} and a warped one~\cite{RSFCNC,H03,APS05}. Because of the KK interactions, the couplings of the $Z$ to the fermions are shifted from their SM values. These shifts are not universal in general, and so flavour violations necessarily result when the fermions are rotated from the weak to the mass eigenbasis. More concretely, consider the $Z f\bar{f}$ coupling in the weak eigenbasis: \begin{gather}\label{Eq:Zqq} \mathcal{L}_\mathrm{NC}\supset g_Z Z_\mu\left\{ Q_Z(f_L)\sum_{i,j}(\delta_{ij}+\kappa_{ij}^L)\bar{f}_{iL}\gamma^\mu f_{jL}+ Q_Z(f_R)\sum_{i,j}(\delta_{ij}+\kappa_{ij}^R)\bar{f}_{iR}\gamma^\mu f_{jR} \right\} \,, \end{gather} where $i$, $j$ are family indices, $\kappa_{ij}^{L,R}$ are the coupling shifts defined below, and \begin{equation} Q_Z(f) = T^3_L(f)-s^2 Q_f \,, \qquad Q_f = T^3_L(f) + T^3_R(f) + Q_{X}(f) = T^3_L(f) + \frac{Y_f}{2} \,, \end{equation} with $Q_f$ the electric charge of the fermion, $Y_f/2$ the hypercharge, $T^3_{L,R}(f)$ the third component of weak isospin under $SU(2)_{L,R}$, and $Q_X(f)$ the charge under $U(1)_X$.
We define $\kappa_{ij}\equiv\delta g^{L,R}_{ij}/g_Z$ to be the shift in the weak eigenbasis $Z$ couplings to fermions relative to its SM value given by $g_Z\equiv e/(sc)$, as well as the usual quantities \begin{equation} e = \frac{g_L\,g'}{\sqrt{g_L^2+g'\,^2}} \,, \qquad g' = \frac{g_R\,g_{X}}{\sqrt{g_R^2+g_{X}^2}} \,, \qquad s = \frac{e}{g_L} \,, \qquad c = \sqrt{1-s^2} \,, \end{equation} where $g_L = g_{5L}/\sqrt{r_c\pi}$ is the 4D gauge coupling constant of $SU(2)_L$ (and similarly for the rest). Rotating to the mass eigenbasis of the SM quarks defined by $f' = U^\dag f$, where the unitary matrix $U$ diagonalizes the SM quark mass matrix, flavour off-diagonal terms appear: \begin{equation} \mathcal{L}_\mathrm{FCNC}\supset g_Z Z_\mu\left\{ Q_Z(f_L)\sum_{a,b}\hat{\kappa}_{ab}^L\,\bar{f}'_{aL}\gamma^\mu f'_{bL}+ Q_Z(f_R)\sum_{a,b}\hat{\kappa}_{ab}^R\,\bar{f}'_{aR}\gamma^\mu f'_{bR} \right\} \,, \end{equation} where the mass eigenbasis flavour off-diagonal couplings are given by \begin{equation}\label{Eq:kFCNC} \hat{\kappa}_{ab}^{L,R} = \sum_{i,j}(U^\dag_{L,R})_{ai}\kappa_{ij}^{L,R}(U_{L,R})_{jb} \,. \end{equation} Note that the off-diagonal terms would vanish only if $\kappa$ is proportional to the identity matrix. In the RS framework, one leading source of corrections to the SM neutral current interaction comes from the exchanges of heavy KK neutral gauge bosons as depicted in Fig.~\ref{Fig:ZKK}. \begin{figure}[htbp] \centering \includegraphics[width=1.5in]{gKKpic.eps} \caption{\label{Fig:ZKK} Correction to the $Z f\bar{f}$ coupling due to the exchange of gauge KK modes. The fermions are in the weak eigenbasis, and $X = Z,\,Z'$.} \end{figure} The effect of gauge KK exchanges gives rise only to the diagonal terms of $\kappa$. It can be efficiently calculated with the help of the massive gauge 5D mixed position-momentum space propagators, which automatically sum up contributions from all the KK modes~\cite{ADMS03,CDPTW03}.
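That only a non-universal $\kappa$ generates flavour violation is immediate from Eq.~\eqref{Eq:kFCNC}; a small numerical sketch (the rotation matrix here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """A random unitary from the QR decomposition of a complex Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

U = random_unitary(3)

# Non-universal diagonal shifts (a larger shift for the third family)
kappa = np.diag([1e-4, 1e-4, 1e-2])
kappa_hat = U.conj().T @ kappa @ U                 # Eq. (kFCNC)
off_diag = np.abs(kappa_hat - np.diag(np.diag(kappa_hat))).max()

# A universal shift commutes with the rotation: no flavour violation
kappa_hat_univ = U.conj().T @ (1e-3 * np.eye(3)) @ U
```

The rotated non-universal shift acquires off-diagonal entries of order $\kappa_3 U_{3a}U_{3b}$, while the universal one stays exactly diagonal.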
The leading contributions can be computed in terms of the overlap integral, \begin{equation} G_f^{L,R}(c_{L,R}) = \frac{v_W^2}{2}\,r_c\!\int_0^{\pi}\!d\phi |f^0_{L,R}(\phi,c_{L,R})|^2\tilde{G}_{p=0}(\phi,\pi) \,, \end{equation} where $\tilde{G}_{p=0}$ is the zero-mode subtracted gauge propagator evaluated at zero 4D momentum. For KK modes obeying the $(++)$ boundary condition, $\tilde{G}_{p=0}$ is given by~\cite{CDPTW03} \begin{align} \tilde{G}^{(++)}_{p=0}(\phi,\phi') = \frac{1}{4k(k r_c\pi)}\bigg\{ \frac{1-e^{2k r_c\pi}}{k r_c\pi}+e^{2k r_c\phi_<}(1-2k r_c\phi_<) +e^{2k r_c\phi_>}\Big[1+2k r_c(\pi-\phi_>)\Big]\bigg\} \,, \end{align} while for those obeying the $(-+)$ boundary condition \begin{equation} \tilde{G}^{(-+)}_{p=0}(\phi,\phi') = -\frac{1}{2k}\left(e^{2k r_c\phi_<}-1\right) \,, \end{equation} where $\phi_<$ ($\phi_>$) is the minimum (maximum) of $\phi$ and $\phi'$. The gauge KK correction to the $Z$ coupling is thus $\kappa^g_{ij} = \kappa^g_{q_i}\delta_{ij}$, with $\kappa^g_{q_i}$ given by~\cite{CPSW06} \begin{equation}\label{Eq:dgig} (\kappa^g_{q_i})_{L,R} = \frac{e^2}{s^2 c^2}\left\{ G^{q^i_{L,R}}_{++}-\frac{G^{q^i_{L,R}}_{-+}}{Q_Z(q^i_{L,R})}\left[ \frac{g_R^2}{g_L^2}c^2 T_R^3(q^i_{L,R})-s^2\frac{Y_{q^i_{L,R}}}{2}\right] \right\} \,, \end{equation} where the label $q$ denotes the fermion species. Note that when the fermions are localized towards the UV brane ($c_L \gtrsim 0.6$ and $c_R \lesssim -0.6$), $G_{-+}$ is negligible, while $G_{++}$ becomes essentially flavour independent~\cite{CPSW06}. Another source of corrections to the $Z f\bar{f}$ coupling arises from the mixing between the fermion zero modes and the fermion KK modes brought about by the Yukawa interactions. These mixings generate diagonal as well as off-diagonal terms in $\kappa$. The diagram involved is depicted in Fig.~\ref{Fig:fKK}.
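The zero-mode-subtracted propagators quoted above are simple closed-form expressions and can be coded directly. The Python sketch below checks only structural properties (symmetry in $\phi\leftrightarrow\phi'$, and the vanishing of $\tilde{G}^{(-+)}$ when one argument sits on the UV brane, where the $(-)$ mode has no support); the values of $k$ and $k r_c$ are illustrative choices of ours.

```python
import numpy as np

# Numerical sketch of the zero-mode-subtracted propagators at zero 4D momentum.
# k and krc = k*r_c are illustrative; krc ~ 11 is a typical RS hierarchy choice.
k = 1.0
krc = 11.0

def G_pp(phi, phip):
    """(++) propagator tilde-G^{(++)}_{p=0}(phi, phi')."""
    lo, hi = min(phi, phip), max(phi, phip)
    pref = 1.0 / (4.0 * k * (krc * np.pi))
    return pref * ((1.0 - np.exp(2.0 * krc * np.pi)) / (krc * np.pi)
                   + np.exp(2.0 * krc * lo) * (1.0 - 2.0 * krc * lo)
                   + np.exp(2.0 * krc * hi) * (1.0 + 2.0 * krc * (np.pi - hi)))

def G_mp(phi, phip):
    """(-+) propagator tilde-G^{(-+)}_{p=0}(phi, phi')."""
    lo = min(phi, phip)
    return -(1.0 / (2.0 * k)) * (np.exp(2.0 * krc * lo) - 1.0)
```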
\begin{figure}[htbp] \centering \includegraphics[width=2.2in]{fKKpic2.eps} \caption{\label{Fig:fKK} Correction to the $Z f\bar{f}$ coupling due to SM fermions mixing with the KK modes. The fermions are in the weak eigenbasis. } \end{figure} The effects of the fermion mixings may be similarly calculated by using the fermion analogue of the gauge propagators. It is, however, much more convenient to deal directly with the KK modes here. The KK fermion corrections to the weak eigenbasis $Z$ couplings can be written as \begin{equation}\label{Eq:dgfg} (\kappa^f_{ij})_L = \sum_\alpha\sum_{n=1}^\infty \frac{m_{i\alpha}^\ast m_{j\alpha}}{(m^\alpha_n)^2}\mathfrak{F}^\alpha_R \,, \qquad (\kappa^f_{ij})_R = \sum_\alpha\sum_{n=1}^\infty \frac{m_{\alpha i}m_{\alpha j}^\ast}{(m^\alpha_n)^2}\mathfrak{F}^\alpha_L \,, \end{equation} where $m^\alpha_n$ is the $n$th-level KK fermion mass, $m_{i\alpha}$ are entries of the weak eigenbasis RS mass matrix~\eqref{Eq:RSM} with $\alpha$ a generation index~\footnote{For the shift in the LH couplings, the index $\alpha$ runs over the generations of both types of $SU(2)_R$ doublets, $U$ and $D$, both of which contain KK modes that can mix with LH zero modes. For the shift in the RH couplings, $\alpha$ runs over just the generations of the only type of $SU(2)_L$ doublets, $Q$.}, and \begin{equation} \mathfrak{F}^\alpha_{R,L} = \bigg|\frac{f^n_{R,L}(\pi,c_\alpha^{R,L})} {f^0_{R,L}(\pi,c_\alpha^{R,L})}\bigg|^2 \frac{Q_Z(f_{R,L})}{Q_Z(f_{L,R})} \,, \end{equation} with the argument of $Q_Z$, $f = u,\,d$, denoting up-type or down-type quark species. Note that for $c_\alpha^L < 1/2$ and $c_\alpha^R > -1/2$, $|f^n_{L,R}(\pi,c_\alpha^{L,R})|\approx\sqrt{2k r_c\pi}$. To determine $\hat{\kappa}_{ab}$ in Eq.~\eqref{Eq:kFCNC}, one needs to know the rotation matrices $U_L$ and $U_R$. In the case where the weak eigenbasis mass matrices are given by the symmetric ansatz~\eqref{Eq:MNM}, the analytical forms of the rotation matrices are known.
By rephasing the quark fields so that $\delta^u_i = 0$ and all the Yukawa phases reside in the down sector, the up-type rotation matrix is just the orthogonal diagonalization matrix given by Eq.~\eqref{Eq:MNOQ}. Using the solution of the CKM fit given in Eq.~\eqref{Eq:CKMfit}, we have \begin{equation} U^u_L = U^u_R = U^u \,, \qquad U^u = O_u = \begin{pmatrix} 0.99999 & 0 & 0.00401 \\ -0.00284 & -\frac{1}{\sqrt{2}} & 0.70710 \\ -0.00284 & \frac{1}{\sqrt{2}} & 0.70710 \end{pmatrix} \,. \end{equation} Since we are interested in flavour violating top decays, the relevant mass eigenbasis off-diagonal corrections are $\hat{\kappa}_{3r} = \hat{\kappa}^g_{3r} + \hat{\kappa}^f_{3r}$, $r = 1,\,2$. For the discussion below, using relations~\eqref{Eq:yQu}, \eqref{Eq:FLQFRU}, and~\eqref{Eq:FRD} we will trade the dependences of $\hat{\kappa}^{L,R}_{ab}$ on all the different localization parameters for a single dependence on $c_{Q_3}$ and on the Yukawa coupling magnitudes, which we fix to the values given in Eq.~\eqref{Eq:UVchoice}. Recall that with this choice of the Yukawa coupling magnitudes, the EWPT allowed range for $c_{Q_3}$ is between 0.3 and 0.4. Since $\kappa^g_{ij} = \kappa^g_{q_i}\delta_{ij}$, the gauge KK contribution is simply $\hat{\kappa}^g_{3r} = \sum_i\kappa^g_{q_i}(U^u)^\dag_{3i}U^u_{ir}$, with \begin{equation}\label{Eq:hatkg} \hat{\kappa}^g_{tu} = 2.00672 \times 10^{-3}\,(2\kappa^g_{u}-\kappa^g_{c}-\kappa^g_{t}) \,, \qquad \hat{\kappa}^g_{tc} = 0.50\,(\kappa^g_{t}-\kappa^g_{c}) \,. \end{equation} We plot $\hat{\kappa}^g_{3r}$ as a function of $c_{Q_3}$ in Fig.~\ref{Fig:hkgtuc}. \begin{figure}[htbp] \centering \subfigure[]{ \label{Fig:subfig:gKKtu} \includegraphics[width=2.8in]{gtuabsed.eps}} \hspace{0.2in} \subfigure[]{ \label{Fig:subfig:gKKtc} \includegraphics[width=2.8in]{gtcabsed.eps}} \caption{\label{Fig:hkgtuc} Gauge KK contribution in the case of symmetrical mass matrices to (a) $\hat{\kappa}_{tu}$ and (b) $\hat{\kappa}_{tc}$.
The labels LH and RH indicate whether it is for the LH or RH coupling.} \end{figure} For the fermion KK contributions, the decoupling of the higher KK modes is very efficient, so the first KK mode alone provides a very good approximation to the full tower. Using this approximation, we plot $|\hat{\kappa}^{f}_{3r}|$ as a function of $c_{Q_3}$ in Fig.~\ref{Fig:hkftuc}. \begin{figure}[htbp] \centering \subfigure[]{ \label{Fig:subfig:fKKtu} \includegraphics[width=2.8in]{ftued.eps}} \hspace{0.2in} \subfigure[]{ \label{Fig:subfig:fKKtc} \includegraphics[width=2.8in]{ftced.eps}} \caption{\label{Fig:hkftuc} Fermion KK contribution in the case of symmetrical mass matrices to (a) $\hat{\kappa}_{tu}$ and (b) $\hat{\kappa}_{tc}$. The labels LH and RH indicate whether it is for the LH or RH coupling. The plots are made using the first KK mode to approximate the full KK tower.} \end{figure} \subsection{\label{Sec:tcZ} Experimental signatures at the LHC} The branching ratio of the decay $t \ra c(u) Z$ is given by \begin{align}\label{Eq:BrtcZ} \mathrm{Br}(t \ra c(u) Z) &= \frac{2}{c^2} \Big(|Q_Z(t_L)\,\hat{\kappa}^L_{tc(u)}|^2 + |Q_Z(t_R)\,\hat{\kappa}^R_{tc(u)}|^2\Big) \left(\frac{1-x_t}{1-y_t}\right)^2 \left(\frac{1+2x_t}{1+2y_t}\right)\frac{y_t}{x_t} \,, \end{align} where $x_t= m_Z^2/m_t^2$ and $y_t=m_W^2/m_t^2$. In Fig.~\ref{Fig:BrsymM} we plot the branching ratio as a function of $c_{Q_3}$ in the case where the weak eigenbasis mass matrix has the symmetric ansatz form of~\eqref{Eq:MNM}. \begin{figure}[htbp] \centering \subfigure[]{ \label{Fig:subfig:tuZ} \includegraphics[width=2.8in]{BrtuZed.eps}} \hspace{0.2in} \subfigure[]{ \label{Fig:subfig:tcZ} \includegraphics[width=2.8in]{BrtcZed.eps}} \caption{\label{Fig:BrsymM} Branching ratio in the case of symmetrical mass matrices as a function of $c_{Q_3}$ for the decay (a) $t \ra u\,Z$ and (b) $t \ra c\,Z$. The labels LH and RH indicate LH or RH top decay.} \end{figure} It is clear that the dominant channel is $t \ra c\,Z$.
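The numerical coefficients quoted in Eq.~\eqref{Eq:hatkg} follow from contracting diagonal weak-basis shifts with the $U^u$ matrix given above. They can be cross-checked in a few lines of Python (the helper name is ours); unit inputs isolate the coefficient of each $\kappa^g_{q_i}$.

```python
import numpy as np

# Cross-check of Eq. (hatkg): hat kappa^g_{3r} = sum_i kappa^g_{q_i} (U^u)^dag_{3i} (U^u)_{ir},
# using the symmetric-ansatz rotation matrix U^u quoted in the text.
U = np.array([[ 0.99999,  0.0,               0.00401],
              [-0.00284, -1.0 / np.sqrt(2.0), 0.70710],
              [-0.00284,  1.0 / np.sqrt(2.0), 0.70710]])

def kappa_hat_3r(kappa_u, kappa_c, kappa_t):
    """Return (hat kappa_{tu}, hat kappa_{tc}) for diagonal weak-basis shifts."""
    K = np.diag([kappa_u, kappa_c, kappa_t])
    M = U.T @ K @ U          # U is real, so U^dag = U^T
    return M[2, 0], M[2, 1]

# Unit inputs pick out the coefficient of each weak-basis shift:
cu_tu, cu_tc = kappa_hat_3r(1.0, 0.0, 0.0)   # coefficient of kappa^g_u
cc_tu, cc_tc = kappa_hat_3r(0.0, 1.0, 0.0)   # coefficient of kappa^g_c
ct_tu, ct_tc = kappa_hat_3r(0.0, 0.0, 1.0)   # coefficient of kappa^g_t
```

One finds the $\kappa^g_u$ coefficient of $\hat{\kappa}^g_{tu}$ is $\simeq 2\times 2.007\times 10^{-3}$, the $\kappa^g_{c,t}$ coefficients are $\simeq \mp 2.007\times 10^{-3}$, and $\hat{\kappa}^g_{tc}\simeq 0.50\,(\kappa^g_t-\kappa^g_c)$, as quoted.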
The branching ratio is at the level of a few $10^{-6}$, which is to be compared to the SM prediction of $\mathcal{O}(10^{-13})$~\cite{SMt}. As $c_{Q_3}$ increases, the decay changes from coming mostly from the LH tops at the low end of the allowed range to receiving comparable contributions from both quark helicities at the high end. Note that one can in principle differentiate whether the quark rotation is LH or RH by studying the polarized top decays. For the asymmetrical quark mass matrix configurations found in Sec.~\ref{Sec:Rand}, the resultant branching ratios and the associated gauge and fermion KK flavour off-diagonal contributions are tabulated in Table~\ref{Tb:ASBrtc}. We give results only for the decay into charm quarks, since this channel dominates over that into up quarks. The magnitudes of our branching ratios for both the symmetrical and asymmetrical quark mass matrices are consistent with a previous estimate in the RS framework~\cite{APS07}. \begin{table}[htbp] \caption{\label{Tb:ASBrtc} Branching ratios of $t \ra c\,Z$ and the associated gauge and fermion KK flavour off-diagonal contributions for the case of asymmetrical mass matrices found from numerical searches.} \begin{ruledtabular} \begin{tabular}{ccccccc} Config.
& $|\hat{\kappa}^g_L|$ & $|\hat{\kappa}^g_R|$ & $|\hat{\kappa}^f_L|$ & $|\hat{\kappa}^f_R|$ & Br($t_L$) & Br($t_R$) \\ \hline I & $3.5 \times 10^{-4}$ & $7.7 \times 10^{-3}$ & $8.2 \times 10^{-3}$ & $4.7 \times 10^{-3}$ & $1.4 \times 10^{-5}$ & $4.1 \times 10^{-6}$ \\ II & $4.3 \times 10^{-4}$ & $5.8 \times 10^{-3}$ & $9.9 \times 10^{-3}$ & $2.9 \times 10^{-3}$ & $2.1 \times 10^{-5}$ & $2.0 \times 10^{-6}$ \\ III & $2.1 \times 10^{-4}$ & $3.8 \times 10^{-3}$ & $5.0 \times 10^{-3}$ & $7.0 \times 10^{-3}$ & $5.4 \times 10^{-6}$ & $3.2 \times 10^{-6}$ \\ \end{tabular} \end{ruledtabular} \end{table} It is interesting to note from Fig.~\ref{Fig:BrsymM}(b) and Table~\ref{Tb:ASBrtc} that in $t \ra c\,Z$ decays, the LH decays dominate over the RH ones in the case of both symmetrical and asymmetrical quark mass matrices. The reason for this is, however, different for the two cases. In the symmetric case, $M_u = M_u^\dag$ and so $U^u_L = U^u_R = U^u$. Thus the difference between the LH and RH decays is due to the differences in the weak eigenbasis couplings, as can be seen from Eq.~\eqref{Eq:hatkg}, and in $Q_Z$. By comparing Fig.~\ref{Fig:hkftuc}(b) to~\ref{Fig:hkgtuc}(b) we see $|\hat{\kappa}^f_{tc}|\sim|\hat{\kappa}^g_{tc}|$, and from Fig.~\ref{Fig:hkgtuc}(b) we have $0.9 \lesssim |(\hat{\kappa}^g_{tc})_R|/|(\hat{\kappa}^g_{tc})_L| \lesssim 2$~\footnote{It may seem counterintuitive that $|(\hat{\kappa}^g_{tc})_R|$ can be smaller than $|(\hat{\kappa}^g_{tc})_L|$ (for $c_{Q_3} < 0.32$), as one may expect the couplings to be dominated by the top contribution, and the coupling to the RH top to be larger than that to the LH top because the RH top is localized closer to the IR brane. However, such expectations can be misleading. Because of the mixing matrices, the mass eigenbasis coupling, $\hat{\kappa}^g_{tc}$, is not just a simple sum of the weak eigenbasis couplings, $\kappa^g_{q_i}$, but involves their differences, as already mentioned.
Moreover, although the greatest contribution comes from the top, the contribution from the second generation may not be completely negligible, as is the case here for $(\kappa^g_c)_R$ for the particular symmetric ansatz that we study.}. However, as $|Q_Z(t_L)|\gtrsim 2|Q_Z(t_R)|$, the net effect is that the LH decay dominates (see Eq.~\eqref{Eq:BrtcZ}). In the asymmetrical case, $M_u \neq M_u^\dag$ and $U^u_L \neq U^u_R$, with no pattern relating the LH to the RH mixings. In each of the configurations of localization parameters listed in Table~\ref{Tb:ASBrtc}, while $|(\hat{\kappa}^g_{tc})_L| \ll |(\hat{\kappa}^g_{tc})_R|$, it turns out that not only is $|(\hat{\kappa}^g_{tc})_R|\sim|(\hat{\kappa}^f_{tc})_R|$ and $|(\hat{\kappa}^f_{tc})_L|\sim|(\hat{\kappa}^f_{tc})_R|$, but there is also a relative minus sign between the gauge and the fermion KK contributions, resulting in a destructive interference that leads to a greater branching ratio for the LH decay. This is to be contrasted with Ref.~\cite{APS07}, where it is the RH mode that is found to dominate. There it appears that the possibility of a cancellation between the gauge and fermion KK contributions was not considered. We note and emphasize here the crucial role the quark mass and mixing matrices play in determining the mass eigenbasis flavour off-diagonal couplings $\hat{\kappa}_{ab}$. Most importantly, the $\hat{\kappa}_{ab}$ do not depend on the fermion localizations alone. Whether or not there is a cancellation between the gauge and fermion KK contributions depends on the particular quark mass and mixing matrices just as much as on the configuration of fermion localizations used. Such a cancellation is by no means generic, and has to be checked whenever a new admissible combination of fermion localizations and quark mass and mixing matrices arises.
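The interference pattern just described can be illustrated with the configuration-I magnitudes from Table~\ref{Tb:ASBrtc}. The Python sketch below assumes real couplings with the stated relative minus sign in the RH channel; it is meant only to show how the cancellation suppresses the RH coupling, not to reproduce the tabulated branching ratios, which depend on the full complex phases.

```python
# Magnitudes from Table (config. I); the relative signs are an assumption
# made purely for illustration of the interference pattern.
kg_L, kf_L = 3.5e-4, 8.2e-3    # gauge and fermion KK, LH coupling
kg_R, kf_R = 7.7e-3, 4.7e-3    # gauge and fermion KK, RH coupling

k_L = kg_L + kf_L               # LH: contributions add constructively
k_R_destructive = kg_R - kf_R   # RH: relative minus sign -> cancellation
k_R_constructive = kg_R + kf_R  # what the RH coupling would be without it
```

With the relative sign, the effective RH coupling drops well below the LH one, reversing the naive expectation based on the individual magnitudes.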
In addition, since the 5D gauge and Yukawa couplings are independent parameters, the fact that $|(\hat{\kappa}^g_{ab})_L| \ll |(\hat{\kappa}^g_{ab})_R|$ does not mean the same has to hold between $|(\hat{\kappa}^f_{ab})_L|$ and $|(\hat{\kappa}^f_{ab})_R|$. Since $\kappa^g_{ij}$ and $\kappa^f_{ij}$ have very different structures (see Eqs.~\eqref{Eq:dgig} and~\eqref{Eq:dgfg}), the combined effect when convolved with the particular quark mixing matrices can be quite different, as is the case for the three asymmetrical configurations listed in Table~\ref{Tb:ASBrtc}. Both the single top and the $\bar{t}t$ pair production rates are expected to be high at the LHC, with the latter about a factor of two higher than the former. Up to a small correction, single tops are always produced with LH helicity, while both helicities are produced in pair production. Thus a simple way of testing the above at the LHC is to compare the decay rates of $t \ra Z$ + jets in single top production events (e.g. in the associated $t\,W$ production) to those from pair production, so that information on both LH and RH decays can be extracted. Note that both the single and pair production channels should give comparable branching ratios initially at the discovery stage. Of course, greater sensitivity would be obtained from pair production after several years of measurements. \section{\label{Sec:Conc} Summary} We have performed a detailed study of the admissible forms of quark mass matrices in the MCRS model which reproduce the experimentally well-determined quark mass hierarchy and CKM mixing matrix, assuming a perturbative and hierarchyless Yukawa structure that is not fine-tuned. We arrived at the admissible forms in two different ways. In one, we examined several quark mass matrix ans\"atze which are constructed to fit the quark masses and the CKM matrix.
These ans\"atze have a high degree of symmetry built in, which allows the quark localizations (which give rise to the mass hierarchy in the RS setting) to be determined analytically. We found that the Koide-type symmetrical ansatz is compatible with the assumption of a hierarchyless Yukawa structure in the MCRS model, but not the Fritzsch-type hermitian ansatz. Because these ansatz mass matrices are symmetric, the LH and RH quark mixing matrices are the same. In the other approach, no \textit{a priori} quark mass structure was assumed. A numerical multiparameter search for configurations of quark localization parameters and Yukawa couplings that give admissible quark mass matrices was performed. Admissible configurations were found after an extensive search. No discernible symmetries or patterns were found in the quark mass matrices for either the up-type or down-type quarks. The LH and RH mixing matrices are found to be different, as expected given the asymmetrical form of the mass matrices. We studied the possibility of differentiating between the cases of symmetrical and asymmetrical quark mass matrices through flavour changing top decays, $t \ra Z$ + jets. We found that the dominant decay mode is the one with a final-state charm jet. The total branching ratio is calculated to be $\sim 3$ to $5 \times 10^{-6}$ in the symmetrical case and $\sim 9 \times 10^{-6}$ to $2 \times 10^{-5}$ in the asymmetrical case. The signal is within reach of the LHC, whose sensitivity has been estimated to be $6.5\times 10^{-5}$ for a $5\sigma$ signal at $100\,\mathrm{fb}^{-1}$~\cite{Atlas}. However, the difference between the two cases may be difficult to discern. We have also investigated the decay $t_R\ra b_R\,W$, as a large number of top quarks are expected to be produced at the LHC. We found that a branching ratio at the level of $\mathcal{O}(10^{-5})$ is possible.
Although the signal is not negligible, given the huge SM background its detection is still a very challenging task, and a careful feasibility study is needed. This is beyond the scope of the present paper. \section{Acknowledgements} W.F.C. is grateful to the TRIUMF Theory group for their hospitality when part of this work was completed. The research of J.N.N. and J.M.S.W. is partially supported by the Natural Sciences and Engineering Research Council of Canada. The work of W.F.C. is supported by the Taiwan NSC under Grant No. 96-2112-M-007-020-MY3. {\em Note added}: After the completion of this work, we became aware of Ref.~\cite{CFW08}, which finds that flavour bounds from $\Delta F = 2$ processes in the meson sector, in particular that from $\epsilon_K$, might require the KK mass scale to be generically $\mathcal{O}(10)$~TeV in the MCRS model. We will show in an ensuing publication~\cite{future} that parameter space generically exists where a KK mass scale of a few TeV is still consistent with all the flavour constraints from meson mixings, and that our conclusions with regard to the top decay in this work continue to hold.
\section{Introduction} \label{sec_intro} A significant step towards understanding how galaxies form and evolve can be made by measuring the variation in their star formation rate (SFR) with age. Imprinted in every galaxy's integrated light is a record of its entire life from birth, through passive evolution, possible merging and recycling of material, up to the epoch at which it is observed. Star formation histories (SFHs) therefore play a crucial role in the quest for a complete and accurate model of the formation of stellar mass in the Universe and how distant systems relate to those locally. Characterising galaxy SFHs has been a subject of much interest for several decades, with studies attempting to achieve this aim through a variety of different means. Approaches can be broadly divided into those using multi-band photometry and those using spectra. Recently the practice has seen a significant revival thanks to improvements in stellar synthesis modelling and the advent of large datasets such as the Sloan Digital Sky Survey \citep[SDSS;][]{stoughton02}. Many new spectroscopic techniques have been developed \citep[e.g.,][]{heavens00,vergely02,cid04,cid05,nolan06,ocvirk06,chil07,tojeiro07} and in their various forms, these have seen application to several sets of real data \citep[e.g.,][]{reichardt01,panter03,heavens04,panter04,sheth06,cid07,nolan07,panter07,koleva08}. Similarly, there have been numerous recent studies conducted using multi-band photometry \citep[e.g.,][]{borch06,schawinski07,salim07,noekse07,kaviraj07} including \citet{kauffmann03} who combined multi-band photometry with measurements of the H$\delta$ absorption line and 4000\,\AA\ break strength.
In a similar vein to spectroscopic versus photometric redshift estimation, SFHs determined from spectra tend to have greater precision per galaxy, whereas those derived from multi-band photometry allow many more objects to be studied in the same amount of observing time but with a compromise in SFH resolution. The method adopted by existing multi-band studies is to assume a parametric model for the SFH. The parameters are adjusted to find the set of model fluxes, computed from a spectral library of choice, that best matches the set of observed fluxes. This not only forces the SFH to adhere to a potentially unrepresentative prescribed form, but also necessitates a fully non-linear minimisation over all parameters. In contrast, the majority of the recent spectroscopic methods divide up a galaxy's history into several independent time intervals and reconstruct the average SFR in each interval to give a discretised SFH. The advantage this brings, as shown in Section \ref{sec_most_prob_SFH}, is that finding the best-fit SFR in every interval for a fixed set of galaxy parameters (such as redshift, extinction and metallicity) is a linear problem. The inefficient non-linear SFH minimisation, with its risk of becoming trapped in local minima, is therefore replaced with a simple matrix inversion guaranteeing that the global minimum for the fixed set of galaxy parameters is found. The prescribed SFH models used by the multi-band methods are mainly driven by the small number of passbands used in many multi-band campaigns. With only a small number of passbands, the ability to constrain a galaxy's SFH is limited and a model SFH with only one or two parameters must be used. However, modern surveys are being carried out in many more passbands and over larger wavelength ranges than ever before \citep[for example, the COMBO-17 survey of][]{wolf01}.
Given these recent improvements, the possibility of recovering discretised SFHs from multi-band photometry alone is now worthy of investigation. The purpose of this paper is twofold. Firstly, a new SFH reconstruction method that recovers discretised SFHs is presented. It is shown how the Bayesian evidence can be used to simultaneously establish the most appropriate number of discrete SFH time intervals and the optimal strength with which the solution should be regularised. The formalism is completely general and can be applied to spectra just as easily as multi-band photometry as well as a combination of both. The Bayesian evidence gives a more natural and simplified alternative to existing procedures for determining the optimal number of SFH intervals and for determining the correct level of regularisation. Secondly, this paper presents results of an investigation into the feasibility of using the new method with multi-band photometry alone. By applying the method to synthetic galaxy catalogues created with different input SFHs and filtersets, the accuracy of the recovered discretised SFHs is demonstrated. This study focuses in particular on the dependence of the reconstruction on galaxy redshift, photometric signal-to-noise (S/N), the wavelength range spanned by the passbands, the number of passbands and the presence/absence of a new and/or old stellar population. The layout of the paper is as follows. In Section \ref{sec_method} the SFH reconstruction method is described. Section \ref{sec_synthetic_cats} gives details of how the synthetic catalogues are generated. The method is applied to these catalogues in Section \ref{sec_sims} to assess its performance. Section \ref{sec_summary} gives a summary of the findings of this paper to act as recommendations for applying the method to real data. 
Throughout this paper, the following cosmological parameters are assumed: ${\rm H}_0=100\,{\rm h}_0=70\,{\rm km\,s}^{-1}\,{\rm Mpc}^{-1}$, $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$. All magnitudes are expressed in the AB system. \section{The method} \label{sec_method} The method divides a galaxy's history into discrete blocks of time. The goal is to establish the average star formation rate (SFR) in each block to arrive at a discretised SFH that best fits the observed galaxy multi-band photometry. As shown in Section \ref{sec_sims}, the optimal number of blocks is a function of many attributes, including the number of filters in which the galaxy has been observed and the signal-to-noise (S/N) of the data. \subsection{Determination of model fluxes} \label{sec_model_fluxes} In order to proceed, a model flux must be determined in each passband from the discretised SFH to establish the goodness of fit with the observed fluxes. For the purpose of demonstration, in this paper, the synthetic spectral libraries of \citet{bruzual03} are used to compute the SED for each SFH block although the method is completely general and can be applied with any empirical or synthetic library. Starting with a simple stellar population (SSP) SED, $L_{\lambda}^{\rm SSP}$, of metallicity $Z$, a composite stellar population (CSP) SED, $L_{\lambda}^{\, i}$, is generated for the $i$th block of constant star formation in a given galaxy using \begin{equation} L_{\lambda}^{\, i} = \frac{1}{\Delta t_i} \int^{t_i}_{t_{i-1}} {\rm d}t' \, L_{\lambda}^{\rm SSP}(\tau(z)-t') \end{equation} where the block spans the period $t_{i-1}$ to $t_i$ in the galaxy's history and $\tau$ is the age of the galaxy (i.e., the age of the Universe today minus the look-back time to the galaxy). The normalisation $\Delta t_i=t_i-t_{i-1}$ ensures that the CSP has the same normalisation as the SSP which in the case of the \citet{bruzual03} libraries is one solar mass.
In practice, the integration is replaced by a sum over the SSP SEDs which are defined at discrete time intervals. In the present work, this sum is carried out over finer intervals than the library provides by interpolating the SSP SEDs linearly in log($t$). Note also that this work considers mono-metallic stellar populations such that $Z$ does not vary with age. The more general problem of allowing $Z$ to evolve with time is left for future work (see Section \ref{sec_summary}). To model the effects of extinction on the final SED (i.e., the SED from all blocks in the SFH), reddening is applied. This is achieved by individually reddening the CSP of each block using \begin{equation} L_{\lambda,R}^{\, i} = L_{\lambda}^{\, i} \, 10^{-0.4 k(\lambda)A_V/R_V} \, . \end{equation} Here, $A_V$ is the extinction and $k(\lambda)$ is taken as the Calzetti law for starbursts \citep{calzetti00}, \begin{equation} k(\lambda) = \left\{ \begin{array}{l} 2.659(-2.156+\frac{1.509}{\lambda}-\frac{0.198}{\lambda^2}+ \frac{0.011}{\lambda^3})+R_V \\ \hspace{15mm} ({\rm for}\,\,\, 0.12\mu{\rm m} < \lambda < 0.63\mu{\rm m}) \\ 2.659(-1.857+\frac{1.04}{\lambda})+R_V \\ \hspace{15mm} ({\rm for \,\,\,} 0.63\mu{\rm m} < \lambda < 2.2\mu{\rm m}) \end{array} \right . \end{equation} with $R_V=4.05$ and $\lambda$ in microns. To match the wavelength range of the passbands considered in this study, it is assumed that the longer wavelength half of the function applies up to 10$\mu$m and the shorter wavelength half is extrapolated down to 0.01$\mu$m using the average slope between 0.12$\mu$m and 0.13$\mu$m. The model flux (i.e., photon count) observed in passband $j$ from a given block $i$ in the SFH when the galaxy lies at a redshift $z$ is then \begin{equation} F_{ij}=\frac{1}{4\pi d_L^{\,2}}\int {\rm d}\lambda \frac{\lambda \, L^i_{\lambda,R}(\lambda/(1+z))T_j(\lambda)}{(1+z)\, hc} \end{equation} where $d_L$ is the luminosity distance and $T_j$ is the transmission curve of passband $j$. 
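The reddening step above maps directly into code. The following Python sketch (numpy assumed; the function names are ours) implements the quoted Calzetti law and the factor $10^{-0.4 k(\lambda) A_V/R_V}$; the extrapolations beyond the quoted wavelength limits described in the text are omitted for brevity.

```python
import numpy as np

R_V = 4.05

def k_calzetti(lam_um):
    """Calzetti et al. (2000) starburst law; lam_um in microns (0.12-2.2)."""
    lam = np.asarray(lam_um, dtype=float)
    short = 2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2
                     + 0.011 / lam**3) + R_V
    long_ = 2.659 * (-1.857 + 1.040 / lam) + R_V
    return np.where(lam < 0.63, short, long_)

def redden(L_lam, lam_um, A_V):
    """Apply L_{lambda,R} = L_lambda * 10^{-0.4 k(lambda) A_V / R_V}."""
    return L_lam * 10.0**(-0.4 * k_calzetti(lam_um) * A_V / R_V)
```

By construction $k(0.55\,\mu{\rm m})\simeq R_V$, so $A_V$ is indeed the $V$-band extinction, and the two branches of $k(\lambda)$ join smoothly at $0.63\,\mu$m.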
\subsection{Determination of the most probable SFH} \label{sec_most_prob_SFH} To find the normalisations $a_i$ which result in a set of model fluxes that best fits the observed fluxes, the following $\chi^2$ function is minimised \begin{equation} \label{eq_chi_sq} \chi^2=\sum_j^{N_{\rm filt}} \frac{(\sum_i^{N_{\rm block}} \, a_i F_{ij} - F^{\rm obs}_j)^2}{\sigma_j^2} \end{equation} where $F^{\rm obs}_j$ is the flux observed in passband $j$ from the galaxy and $\sigma_j$ is its error. The sum in $i$ acts over all $N_{\rm block}$ SFH blocks. In the case of application to spectroscopic data instead of multi-band photometry, the index $j$ would refer to spectral elements rather than passbands. $F_{ij}$ would represent the flux of the model SED over the wavelength range $\lambda_j$ to $\lambda_j+\Delta\lambda$ from SFH block $i$ and $F^{\rm obs}_j$ would be the corresponding flux from the observed SED. In fact, the generality of this approach means that a combination of spectroscopic data and multi-band photometry can be used\footnote{In the case of covariant data, equation (\ref{eq_chi_sq}) would be replaced by the more general form $\chi^2=\sum_{ij}\,(x_i-y_i)\sigma_{ij}^{-1}(x_j-y_j)$, with $x_j=\sum_i\,a_i F_{ij}$, $y_j=F^{\rm obs}_j$ and where $\sigma_{ij}^{-1}$ is the inverse covariance matrix.}, the appropriate weighting being applied by $\sigma_j^2$. In any case, the total stellar mass of the galaxy is simply the sum of the mass normalisations of each block: \begin{equation} \label{eq_stellar_mass} {\rm M}_* = \sum_i^{N_{\rm block}} a_i \, . \end{equation} The minimum $\chi^2$ occurs when the condition $\partial\chi^2/\partial\,a_i=0$ is simultaneously satisfied for all $a_i$. This is a linear problem with the following solution: \begin{equation} \label{eq_matrix} \mathbf{a}=\mathbf{G}^{-1}\mathbf{d} \, . 
\end{equation} Here $\mathbf{a}$ is a column vector composed of the normalisations $a_i$, $\mathbf{G}$ is an $N_{\rm block} \times N_{\rm block}$ square matrix whose $ik$th element is given by \begin{equation} G_{ik}=\sum_{j=1}^{N_{\rm filt}} \, F_{ij}F_{kj}/\sigma_j^2 \end{equation} and $\mathbf{d}$ is a one dimensional vector with elements \begin{equation} d_i=\sum_{j=1}^{N_{\rm filt}} \, F_{ij}F_j^{\rm obs}/\sigma_j^2 \, . \end{equation} However, in the presence of noise, the solution given by equation (\ref{eq_matrix}) is formally ill-conditioned. This is circumvented by linear regularisation, which involves adding an extra term, the regularisation matrix $\mathbf{H}$, weighted by the regularisation weight, $w$ (see Section \ref{sec_regularisation}): \begin{equation} \label{eq_matrix_reg} \mathbf{a}=(\mathbf{G}+w\mathbf{H})^{-1}\mathbf{d} \,. \end{equation} The errors on the normalisations $a_i$ are obtained from the corresponding covariance matrix, which was derived by \citet{warren03} for this problem: \begin{equation} \label{eq_sfr_errors} \mathbf{C} = \mathbf{R} - w \mathbf{R} (\mathbf{R}\mathbf{H})^T \end{equation} where the definition $\mathbf{R}=(\mathbf{G}+w\mathbf{H})^{-1}$ has been made for simplicity. Unfortunately, by regularising the solution, a new problem is introduced. The effect of regularisation is to reduce the effective number of degrees of freedom by an amount that cannot be satisfactorily determined. Furthermore, applying the same regularisation weight to two different models (for example, different numbers of SFH blocks) results in a different effective number of degrees of freedom for each model \citep{dye05,dye07}. This means the minimum $\chi^2$ is biased away from the most probable solution. More crucially, comparison between different models cannot be carried out fairly using the $\chi^2$ statistic.
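To make the inversion concrete, here is a minimal numpy sketch of equation (\ref{eq_matrix_reg}): toy model fluxes $F_{ij}$ are generated at random, and the block normalisations are recovered from noise-free "observed" fluxes with a weak identity regularisation. The flux values and the choice $\mathbf{H}=\mathbf{I}$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_block, N_filt = 3, 8
F = rng.uniform(0.5, 2.0, size=(N_block, N_filt))  # toy model fluxes F_ij
a_true = np.array([1.0, 0.5, 2.0])                 # input SFH normalisations
sigma = np.full(N_filt, 1.0e-3)                    # flux errors
F_obs = a_true @ F                                  # noise-free observed fluxes

G = (F / sigma**2) @ F.T        # G_ik = sum_j F_ij F_kj / sigma_j^2
d = (F / sigma**2) @ F_obs      # d_i  = sum_j F_ij F_j^obs / sigma_j^2
H = np.eye(N_block)             # zeroth-order (identity) regularisation
w = 1.0e-6                      # weak regularisation weight

a = np.linalg.solve(G + w * H, d)   # a = (G + wH)^{-1} d
```

With negligible noise and a weak $w$, the solve recovers `a_true` essentially exactly, illustrating that the per-block SFRs follow from a single linear inversion rather than a non-linear search.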
For example, $\chi^2$ could not be used to identify the spectral library that best fits a set of observed fluxes from a selection of libraries. This characteristic has been ignored in previous studies. One solution to the problem is to simply not regularise. Fortunately, a better solution can be found by turning to Bayesian inference and ranking models by their Bayesian evidence instead of $\chi^2$ (see Appendix \ref{sec_app_evidence}). \citet{suyu06} derived an expression for the Bayesian evidence, $\epsilon$, for the linear inversion problem described by equation (\ref{eq_matrix_reg}). Using the previous notation, this can be written \begin{eqnarray} \label{eq_evidence} -2 \,{\rm ln} \, \epsilon &=& \chi^2 -{\rm ln} \, \left[ {\rm det} (w\mathbf{H})\right] +{\rm ln} \, \left[ {\rm det} (\mathbf{G}+w\mathbf{H})\right] \nonumber \\ & & + \, w\mathbf{a}^T\mathbf{H\,a} + \sum_{j=1}^{N_{\rm filt}}{\rm ln} (2\pi \sigma_j^2) \, \end{eqnarray} with $\chi^2$ given by equation (\ref{eq_chi_sq}). Here, the covariance between all pairs of observed fluxes has been set to zero (i.e., it is assumed that all fluxes are independent of each other; for covariant data, the more general form given by \citet{suyu06} would be used). The evidence is a probability distribution in the model parameters and regularisation weight, $w$, allowing different models to be ranked fairly to find the most probable model. Formally, the evidence should be marginalised over $w$ and the result used in the ranking. However, \citet{suyu06} noted that the distribution function for $w$ can be approximated as a delta function centred on the optimal regularisation weight, $\hat{w}$. This is a reasonable simplification since $\hat{w}$ is a distinct value estimable from the data. With this simplification, the maximised value of the evidence at $\hat{w}$ can be directly used to rank models rather than having to maximise the more computationally demanding marginalised evidence (see Appendix \ref{sec_app_evidence}).
This approximation has been adopted in the present study. \subsection{Maximisation procedure} \label{sec_max_proc} The complete process of establishing the most probable SFH when the galaxy's redshift, extinction and metallicity ($z$, $A_V$, $Z$) are unknown is most conveniently separated into three nested levels of inference \citep[e.g., see the general approach to Bayesian inference by][]{mackay03}: \begin{itemize} \item[$\bullet$] In the innermost level, the most likely SFH for a given $z$, $A_V$, $Z$ and number of SFH blocks, $N_{\rm block}$, as well as a given regularisation weight, $w$, is determined with the linear inversion step outlined in the previous section. \item[$\bullet$] In the second level, the most probable $w$ is determined for a given $z$, $A_V$, $Z$ and $N_{\rm block}$ by maximising the evidence given in equation (\ref{eq_evidence}). Quantitatively, this means that equation (\ref{eq_matrix_reg}) must be evaluated every time $w$ is varied in the evidence maximisation. \item[$\bullet$] Finally, in the third and outermost level, the set of parameters $z$, $A_V$, $Z$ and $N_{\rm block}$ which maximise the evidence from the second level are found. \end{itemize} In this paper, a more specific case is considered where $z$ and $A_V$ are known for each galaxy\footnote{Note that assuming prior knowledge of the extinction is not equivalent to setting $A_V=0$ for all sources and ignoring it in the maximisation. Assigning non-zero extinction, despite not being maximised, allows proper exploration of any systematics or SED degeneracies that might exist.}. Such a scenario might arise, for example, if these parameters have been provided without spectroscopic data, if a spectrum is available but over a wavelength range too narrow to obtain a reliable SFH, or if the parameters are known globally for a group or cluster of galaxies but SFHs are required for individual galaxies. 
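To make the second level of inference concrete, the following toy sketch evaluates $-2\,{\rm ln}\,\epsilon$ from equation (\ref{eq_evidence}) over a grid of regularisation weights and picks out the optimal weight. The two-block, three-filter fluxes and the identity regularisation matrix are illustrative assumptions only.

```python
import math

# Toy two-block / three-filter problem (all numbers hypothetical).
F = [[1.0, 0.6, 0.2],
     [0.2, 0.6, 1.0]]                      # F[i][j]: block i, filter j
sigma = [0.1, 0.1, 0.1]
a_true = [1.5, 0.7]
f_obs = [sum(a_true[i] * F[i][j] for i in range(2)) for j in range(3)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    dd = det2(M)
    return [[M[1][1] / dd, -M[0][1] / dd],
            [-M[1][0] / dd, M[0][0] / dd]]

G = [[sum(F[i][j] * F[k][j] / sigma[j] ** 2 for j in range(3))
      for k in range(2)] for i in range(2)]
d = [sum(F[i][j] * f_obs[j] / sigma[j] ** 2 for j in range(3))
     for i in range(2)]
H = [[1.0, 0.0], [0.0, 1.0]]               # identity H, illustration only

def neg2_ln_evidence(w):
    """-2 ln(evidence) for weight w: chi^2 - ln det(wH) + ln det(G+wH)
    + w a^T H a + sum_j ln(2 pi sigma_j^2)."""
    A = [[G[i][k] + w * H[i][k] for k in range(2)] for i in range(2)]
    Ainv = inv2(A)
    a = [sum(Ainv[i][k] * d[k] for k in range(2)) for i in range(2)]
    model = [sum(a[i] * F[i][j] for i in range(2)) for j in range(3)]
    chi2 = sum((f_obs[j] - model[j]) ** 2 / sigma[j] ** 2 for j in range(3))
    wH = [[w * H[i][k] for k in range(2)] for i in range(2)]
    return (chi2 - math.log(det2(wH)) + math.log(det2(A))
            + w * sum(a[i] * H[i][k] * a[k]
                      for i in range(2) for k in range(2))
            + sum(math.log(2.0 * math.pi * s ** 2) for s in sigma))

weights = [1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]
values = [neg2_ln_evidence(w) for w in weights]
w_hat = weights[values.index(min(values))]  # optimal weight on this grid
```

In the full method the minimisation over $w$ uses a downhill simplex rather than a coarse grid, and $\mathbf{H}$ is the second-order matrix of Section \ref{sec_regularisation}; the sketch simply shows that $-2\,{\rm ln}\,\epsilon$ diverges at both very small and very large $w$, bracketing a well-defined $\hat{w}$.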
In addition, mono-metallic stellar populations are considered in this work, such that $Z$ remains constant at all times throughout the galaxy's history (see Section \ref{sec_summary} for a discussion of the more general problem). With these constraints, the third level of inference therefore requires varying only $N_{\rm block}$ and $Z$. The reason for segregating $w$ into a separate second level of inference, rather than combining it with $z$, $A_V$, $Z$ and $N_{\rm block}$, is twofold. Firstly, it is not a formal parameter of the fit; its optimal value is instead an indication of the quantity of information the data contain. In Bayesian terms, regularisation takes the role of a prior since it corresponds to an a priori assumption regarding the smoothness of the solution (see Appendix \ref{sec_app_evidence}). Secondly, there is a practical consideration. As \citet{dye08} discuss, the value of $w$ that maximises the evidence in the second level of inference varies only slightly with different trial sets of model parameters in the third level. This means that one can alternate between varying $w$ whilst fixing $N_{\rm block}$ \& $Z$ and varying $N_{\rm block}$ \& $Z$ whilst fixing $w$. Alternating between two separate levels in this way increases the efficiency of the maximisation. Furthermore, by starting the maximisation with $w$ held fixed at a large value, the evidence varies more smoothly with $N_{\rm block}$ \& $Z$. This gives an additional improvement in the speed with which the global maximum can be found and reduces the risk of becoming stuck at local maxima. In this paper, the alternating maximisation method described above is applied, stopping once the evidence has converged. In principle, as many parameters as desired can be added in the second level above. For example, one might like the duration of some or all of the SFH blocks to vary. Of course, the limiting factor is ultimately the number of photometric data points.
Adding more parameters in the second level results in the evidence being maximised at lower SFH resolutions. Therefore, to maximise SFH resolution, spacing is kept fixed in this work. SFH blocks are assigned a duration $c\,b^{-i}$, where $i$ is the block number (increasing with age), $c$ is a stretch factor, always set to make the end of the last SFH block coincide with the age of the galaxy, and the parameter $b$ is set to 1.5. This exponential spacing allocates smaller periods at later times to account for the fact that a galaxy's SED is more strongly influenced by more recent star formation activity. When finding the most probable value of $w$, the downhill simplex method is used, minimising the quantity $-{\rm ln} \,\epsilon$. However, $N_{\rm block}$ is a discrete parameter, hence to find the most probable $N_{\rm block}$, the evidence is computed across a range of values of $N_{\rm block}$ and that which maximises the evidence is selected. In this way, the optimal number of SFH blocks is automatically selected by the data. Maximising the evidence is a more natural and simplified alternative to the iterative procedure used by \citet{tojeiro07} for determining the optimal number of SFH intervals. This also simplifies the method used by \citet{ocvirk06} for determining the level of regularisation. On a 3 GHz desktop computer, the full process of determining the regularisation weight, metallicity and number of SFH blocks that simultaneously maximise the evidence takes approximately three to four seconds per galaxy for the largest filterset considered in this work comprising 13 filters (see Section \ref{sec_synthetic_cats}). \subsection{Regularisation} \label{sec_regularisation} In a Bayesian framework, regularisation takes the role of a prior by assuming a smooth SFH. The effect is to smear out noisy spikes in the solution. A downside is that real bursts that occur on a short timescale are also smeared.
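The exponential block spacing described above can be made concrete: with durations $c\,b^{-i}$ and $b=1.5$, the stretch factor $c$ follows from requiring the blocks to sum to the galaxy's age. A minimal sketch (the 10 Gyr age is a toy value, and block $N_{\rm block}$ is taken as the most recent epoch so that recent star formation is sampled most finely):

```python
def block_durations(age, n_block, b=1.5):
    """Durations c * b**(-i) for blocks i = 1..n_block, with the stretch
    factor c fixed so that the blocks exactly span the galaxy's age."""
    c = age / sum(b ** (-i) for i in range(1, n_block + 1))
    return [c * b ** (-i) for i in range(1, n_block + 1)]

# Example: five blocks spanning a 10 Gyr history (toy numbers).
durations = block_durations(age=10.0, n_block=5)
# Successive blocks shrink by a factor b = 1.5 and sum to the full age.
```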
However, the goal of adopting a relatively coarsely binned SFH is to recover longer timescale events, aiming for reliability rather than a high SFH resolution. Furthermore, regularisation is necessary to ensure that the linear solution given by equation (\ref{eq_matrix}) is well defined. Regularisation is achieved by adding an extra term to $\chi^2$ so that the figure of merit becomes $\chi^2+B$. Generally, if this term can be written \begin{equation} \label{eq_reg_term} B=\sum_{i,k}\,b_{ik}a_i\,a_k \end{equation} where the $b_{ik}$ are constants, then the solution remains linear since its partial derivative with respect to each normalisation $a_i$ is linear in the $a_i$. The elements of the regularisation matrix $\mathbf{H}$ introduced in Section \ref{sec_most_prob_SFH} are related to the regularisation term $B$ via \begin{equation} 2\,H_{ik}=\frac{\partial^2 B}{\partial a_i \partial a_k} \, . \end{equation} The most basic form of regularisation, known as {\em zeroth-order} regularisation, is obtained by setting $b_{ik}=\delta_{ik}$. In this case, the regularisation term to be minimised becomes $B=\sum_i\,a_i^2$. In {\em first-order} regularisation, the regularisation term is written $B=\sum_i\,(a_i-a_{i+1})^2$ and for {\em second-order} regularisation, $B=\sum_i\,(2a_i-a_{i-1}-a_{i+1})^2$. In principle, the most appropriate type of regularisation to apply can be decided by the Bayesian evidence. However, in this work, for simplicity and to keep the number of non-linear parameters to a minimum, the regularisation type was fixed. Zeroth-order regularisation was rejected on the grounds that it prefers non-physical, null SFH solutions. Tests revealed that second-order regularisation results in slightly more accurate SFHs than first-order on average, hence second-order regularisation was applied to all reconstructions in this paper. A final consideration regarding regularisation is that the matrix $\mathbf{H}$ must not be singular.
Non-singularity of $\mathbf{H}$ ensures that the evidence, which depends on ln[det($\mathbf{H}$)], can be calculated. To guarantee non-singularity, the following was used for the regularisation term: \begin{eqnarray} B&=&(a_{N_{\rm block}}-a_{N_{\rm block}-1})^2 + (a_1-a_2)^2\nonumber \\ &+& \sum_{i=2}^{N_{\rm block}-1}\,(2a_i-a_{i-1}-a_{i+1})^2 \, . \end{eqnarray} \section{Synthetic catalogues} \label{sec_synthetic_cats} The performance of the SFH reconstruction method was tested by applying it to a suite of different synthetic galaxy catalogues. The suite was designed to encompass a range of SFHs and filtersets for assessing how the recovered SFH and total stellar mass depend on each permutation. All catalogues were constructed using \citet{bruzual03} SED libraries with the 1994 Padova evolutionary tracks \citep{bertelli94} and Salpeter initial mass function \citep{salpeter55} using the method outlined in Section \ref{sec_model_fluxes}. Although the exact numerical results will depend on the library used, the observed global trends would be expected to hold true generally. Four different SFH types were considered. These were chosen to establish how the reconstruction fares with the presence/absence of early and/or late star formation activity. The four SFH types are: \begin{itemize} \item {\em Early burst} - The early burst SFH starts with a high SFR from the moment the galaxy is born, followed by an exponential decay. After approximately 40\% of the galaxy's age, the decay ceases leaving a small SFR that remains constant for the remainder of the galaxy's history. \item {\em Late burst} - This SFH has a small constant SFR from birth up until approximately 90\% of the galaxy's age. At this point, it undergoes an instantaneous burst which exponentially decays back to the small constant SFR the galaxy experienced prior to the burst. \item {\em Dual burst} - This is the early burst SFH with the last 10\% of the history replaced with the late burst SFH.
\item {\em Constant SFR} - The SFR is constant throughout the entire history for this SFH. \end{itemize} The different SFHs are plotted in Figure \ref{input_SFHs} with an SFR scale that corresponds to the creation of 1M$_\odot$ over the history of the galaxy. Absolute SFRs for each galaxy are determined by normalising to the absolute $R$ band magnitude as described below. The early and late bursts are designed to fit entirely within their respective early and late SFH blocks used for re-binning in Section \ref{sec_sims} (see Figure \ref{input_SFHs}). Although the early burst creates approximately four times the stellar mass created in the late burst, the bolometric luminosity of the late burst is ten times that of the early burst. Figure \ref{seds_and_filters} plots the SED corresponding to each SFH type. \begin{figure} \epsfxsize=8cm {\hfill \epsfbox{input_SFHs.eps} \hfill} \epsfverbosetrue \caption{The four different SFHs used in the creation of the synthetic galaxy catalogues. The fractional time runs from the big bang to the epoch at the galaxy's redshift. The dashed lines in the top panel indicate the blocks within which all reconstructed SFHs are re-sampled. The early and late bursts are designed to fit entirely within the first and last of these blocks respectively. The bolometric luminosity of the early burst is approximately one tenth that of the late burst.} \label{input_SFHs} \end{figure} For every SFH type, four galaxy catalogues were generated, each using one of the following filtersets: \begin{itemize} \item[$\bullet$] {\em Full set} -- $U$, $B$, $V$, $R$, $I$, $Z'$, $J$, $H$, $K$, $3.6\mu$m, $4.5\mu$m, $5.8\mu$m and $8\mu$m. The full set contains all 13 broad band filters considered in this paper. The last four bands are those of the Infrared Array Camera (IRAC) on board the {\sl Spitzer Space Telescope}. \item[$\bullet$] {\em Half set} -- $B$, $R$, $I$, $J$, $K$, $3.6\mu$m and $4.5\mu$m.
The half set spans a slightly narrower range of wavelengths than that spanned by the full set and contains half the number of filters. This set also omits the 5.8 and 8$\mu$m IRAC bands which are in practice dominated by dust and PAHs. \item[$\bullet$] {\em Optical set} -- $U$, $V$, $R$, $I$, $Z'$. This set is included to match the set of filters used by the SDSS. \item[$\bullet$] {\em Infra-red set} -- $Z'$, $J$, $H$, $K$, $3.6\mu$m and $4.5\mu$m. This set is purely to assess how well infra-red data fares without optical band photometry. \end{itemize} The filter transmission curves are plotted in Figure \ref{seds_and_filters} for comparison with the four different SFH type SEDs. \begin{figure*} \epsfxsize=14cm {\hfill \epsfbox{seds_and_filters.eps} \hfill} \epsfverbosetrue \caption{Synthetic SEDs corresponding to the early burst, late burst, dual burst and constant SFR histories (see Figure \ref{input_SFHs}) for a galaxy at $z=0$ and with $Z=0.1Z_{\odot}$, $A_V=0$. SEDs are plotted normalised to the same $R$ band flux. The filter transmission efficiency is shown for comparison and is correctly scaled. The total throughput in each passband is given by scaling all filters by an additional global system efficiency of 70\% (see text). Left ordinate is plotted on a log scale and applies to the SEDs, right ordinate is linear and applies to the filter curves.} \label{seds_and_filters} \end{figure*} Each of the 16 catalogues was populated with 1000 galaxies with random redshifts, metallicities and extinctions. For each galaxy, apparent magnitudes were generated following these steps: \begin{itemize} \item[1)] Assign a random absolute $R$ band magnitude distributed according to the $R$ band luminosity function of \citet{wolf03} described by a Schechter function \citep{schechter76} with parameters $M_*=-20.70+5{\rm lg \,h_0}$, $\alpha=-1.60$. 
\item[2)] Assign a random redshift drawn from the probability distribution function $z \,\, {\rm exp} \, (-z^2/4)$ within the range $0<z<6$. \item[3)] Assign a random extinction drawn from a uniform distribution within the range $0<A_V<3$. \item[4)] Assign a random metallicity from a uniform logarithmic distribution within the range $0.005 < Z/Z_{\odot} < 2.5$. (Note that this work assumes mono-metallic SFHs, i.e., $Z$ is held constant over the galaxy's entire history. See Section \ref{sec_summary}). Linear interpolation in log($Z$) between the discrete metallicity library SEDs ensures a continuous distribution in $Z$. \item[5)] Compute the apparent $R$ band magnitude using $z$, the absolute $R$ band magnitude from step 1) and the K-correction from the appropriate synthetic SED. \item[6)] Compute fluxes in all passbands using the appropriate redshifted, reddened but arbitrarily scaled synthetic SED. \item[7)] Normalise each passband flux by the factor needed to scale the $R$ band flux computed in step 6) to the apparent $R$ band magnitude computed in step 5). Fluxes at this point are in units of photons/s/m$^{2}$. \item[8)] Assuming a telescope collecting area of 64m$^2$ for filters $U$ to $K$ and 0.6m$^2$ for the four IRAC bands, an integration time of 1800s per filter and an overall system efficiency of 70\% in all filters, compute Poisson errors for each flux. \item[9)] Scatter fluxes by their errors computed in step 8), then convert the resulting fluxes and their errors to AB mags. \end{itemize} Once the photometry is computed for a given source in this way, the number of filters with non-detections, defined by a flux significance of $<10\sigma$, is counted. Sources that are not detected in at least 70\% of the filters contained within the set, or in five filters (whichever is larger), are rejected. Sampling continues in this way until 1000 objects have been generated for the catalogue.
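Steps 2)-4) and the detection cut above amount to straightforward random sampling and counting. A minimal sketch follows; the seed, the sample size and the rounding of the 70 per cent criterion up to a whole number of filters are illustrative assumptions:

```python
import math
import random

random.seed(1)                      # arbitrary seed for reproducibility

def sample_redshift():
    """Rejection-sample p(z) proportional to z * exp(-z^2/4) on 0 < z < 6;
    the density peaks at z = sqrt(2)."""
    p_max = math.sqrt(2.0) * math.exp(-0.5)
    while True:
        z = random.uniform(0.0, 6.0)
        if random.uniform(0.0, p_max) < z * math.exp(-z * z / 4.0):
            return z

def sample_extinction():
    return random.uniform(0.0, 3.0)          # uniform in 0 < A_V < 3

def sample_metallicity():
    """Uniform in log Z over 0.005 < Z/Z_sun < 2.5."""
    lo, hi = math.log10(0.005), math.log10(2.5)
    return 10.0 ** random.uniform(lo, hi)

def accept_source(significances, min_frac=0.7, min_count=5):
    """Keep a source detected at >= 10 sigma in at least 70 per cent of
    the filters, or in five filters, whichever is larger."""
    required = max(min_count, math.ceil(min_frac * len(significances)))
    return sum(1 for s in significances if s >= 10.0) >= required

draws = [(sample_redshift(), sample_extinction(), sample_metallicity())
         for _ in range(1000)]
```

For the full 13-filter set the criterion requires detections in max(5, ceil(0.7 x 13)) = 10 filters; for the five-filter optical set it requires detections in all five.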
The 1800s exposure per filter and telescope collecting area assumed in step 8) above correspond to the following 10$\sigma$ magnitude sensitivity limits: 26.5, 25.9, 25.6, 25.0, 24.8, 23.9, 22.8, 22.2, 22.6, 24.1, 23.5, 21.4, 21.3 in $U$, $B$, $V$, $R$, $I$, $Z'$, $J$, $H$, $K$, $3.6\mu$m, $4.5\mu$m, $5.8\mu$m and $8\mu$m respectively. Non-detections are assigned an apparent magnitude equal to the sensitivity limit of the corresponding filter and an error of 0.5 mag. The system efficiency assumed in step 8) applies in addition to the absolute filter transmission efficiencies indicated in Figure \ref{seds_and_filters} (this brings the IRAC filters to the correct total passband throughputs and accommodates typical optical and IR camera throughputs). \begin{figure} \epsfxsize=8cm {\hfill \epsfbox{R_vs_z_early_burst.eps} \hfill} \epsfverbosetrue \caption{An example of the variation of apparent $R$ band magnitude with redshift for all objects in one of the early burst catalogues. The continuous line shows how $R$ varies with redshift for an M=-22.0 early burst galaxy with $Z=0.1Z_{\odot}$ and $A_V$=0. The dashed lines indicate bins within which objects were selected for the analyses of Section \ref{sec_app_to_all}. The magnitude bin $24<R<25$ selects objects with an approximately constant photometric S/N over as large a range in redshift as possible, whilst the redshift bin $1<z<2$ optimises both the number of objects and their S/N range.} \label{R_vs_z_early_burst} \end{figure} In Section \ref{sec_app_to_all} the effect of photometric S/N and redshift on the reconstructed SFHs and stellar masses is investigated. Two catalogue sub-sets were therefore defined to achieve this. To test dependency on S/N with as little variation in redshift as possible, sources within $1<z<2$ were selected. To test dependency on as large a range in redshift as possible at approximately the same S/N, sources were selected within $24<R<25$.
These sub-sets are shown in Figure \ref{R_vs_z_early_burst} where the apparent $R$ band magnitude is plotted against $z$ for 1000 sources generated using the early burst SFH. \section{Simulation results} \label{sec_sims} This section discusses application of the SFH reconstruction method to synthetic catalogues to assess its performance. An initial demonstration of setting the optimal regularisation is given in Section \ref{sec_reg_effect} before applying the method to the full range of catalogues in Section \ref{sec_app_to_all}. \subsection{The effect of regularisation} \label{sec_reg_effect} The effect of regularisation is demonstrated with an example. Using the late burst SFH, synthetic photometry was generated in the full filterset for a galaxy at $z=1$ with absolute $R$ band magnitude $M_R=-20$, $A_V=0$ and $Z=0.1Z_{\odot}$. The resulting stellar mass of the galaxy was $9.7\times 10^9 \, {\rm M}_{\odot}$. The SFH reconstruction method was then applied for different degrees of regularisation. In each case, the SFH was divided into five exponentially spaced blocks as indicated in the top panel of Figure \ref{input_SFHs} (see Section \ref{sec_max_proc}). For comparison with the reconstructed SFHs, the input SFH was binned into the same five exponentially spaced blocks. \begin{figure} \epsfxsize=8cm {\hfill \epsfbox{example_reg.eps} \hfill} \epsfverbosetrue \caption{Demonstration of the effect of different regularisation weights, $w$, on the reconstructed SFH. This example is based on a synthetic source lying at $z=1$ with $Z=0.1Z_{\odot}$, $M_R=-20$ and $A_V=0$. The reconstruction uses the full set of 13 filters. The input SFH is the late burst model shown here by the thick grey dashed line, binned into five SFH blocks. The optimal regularisation weight found by maximising the Bayesian evidence produces the most accurate reconstructed SFH (continuous line). 
Under-regularisation (dot-dashed line) results in a very inaccurate SFH reconstruction whereas over-regularisation (thin dashed line) smooths the SFH too heavily. For clarity, the standard errors returned by equation (\ref{eq_sfr_errors}) are shown only for the optimally regularised case. In all cases, the points are placed at the SFH block centres.} \label{example_reg} \end{figure} Figure \ref{example_reg} shows how accurately the input late burst SFH was reconstructed with three different values of the regularisation weight, $w$. One of these values is the optimal weight, $w=1.5\times 10^{-4}$, as determined by the maximal evidence, whilst the remaining two were set higher and lower than this by $\sim 3$ dex. In the figure, the input binned SFH is shown by the heavy dashed line. Clearly, the optimal regularisation weight gives the most accurate reconstruction. Over-regularisation smooths the SFH too heavily, leading to a biased reconstructed SFH. Conversely, under-regularisation gives rise to a catastrophic failure, with the SFH ringing violently about the input SFH. The exercise also serves to demonstrate that the reconstructed stellar mass (computed using equation \ref{eq_stellar_mass}) depends on $w$. Comparing with the stellar mass of the input galaxy of $9.7\times 10^9 \, {\rm M}_{\odot}$, the optimally regularised case recovered a mass of $(9.9\pm0.3)\times 10^9 \, {\rm M}_{\odot}$, the under-regularised case recovered $(1.61\pm0.06)\times 10^{10} \, {\rm M}_{\odot}$ and the over-regularised case recovered $(8.9\pm0.2)\times 10^9 \, {\rm M}_{\odot}$. A sub-optimal regularisation weight can therefore bias the reconstructed mass. As stated previously, the actual number of SFH blocks is always higher than the effective number of blocks when regularising due to the smoothness constraints imposed on the SFH. To reiterate, this is why the evidence should be the statistic used to rank models rather than $\chi^2$. 
These constraints increase the covariance between pairs of SFH blocks although the effect is counteracted by the evidence which selects fewer SFH blocks (and hence less covariant solutions) when the data do not support a high resolution SFH. Inspection of many realisations of the covariance matrix (excluding failed reconstructions -- see next section) indicates that highly covariant solutions do not occur. A further observation is that the early blocks are always more covariant than the later blocks. \subsection{Application to the full suite of catalogues} \label{sec_app_to_all} The SFH reconstruction method was applied to the full suite of catalogues. An assessment was made of how the accuracy of the reconstructed SFH depends on the number of filters, the wavelength range spanned by the filterset, the S/N of the photometry, the presence and/or absence of early and/or late star formation activity and redshift. For each combination of these variables, a synthetic catalogue was generated, comprising 1000 galaxies adhering to the ranges in $z$, $A_V$, $Z$ and absolute magnitude given in Section \ref{sec_synthetic_cats}. For every object in each case, the SFH was reconstructed following the procedure outlined in Section \ref{sec_max_proc}, maximising the evidence by varying the regularisation weight, number of SFH blocks and metallicity. \begin{figure*} \epsfxsize=17.8cm {\hfill \epsfbox{sfh_recon1.eps} \hfill} \epsfverbosetrue \caption{SFH reconstruction binned by redshift as labelled. SFH type is separated by column and filterset by row. Reconstructed SFHs are shown by the data points and lines (staggered for clarity) and apply to objects selected by $24<R<25$. Error bars show the standard deviation of objects in the redshift bin. Grey shaded histograms are the binned input SFHs.} \label{sfh_recon1} \end{figure*} The results show that approximately 1\% of reconstructions completely fail to recover the input SFH or galaxy parameters. 
The size of this fraction is independent of SFH type or filterset. These catastrophic failures occur either when the maximisation becomes stuck at an incorrect local maximum or when the maximisation fails to converge. Fortunately, these cases are easily identified by their very small evidence and large $\chi^2$. Figure \ref{lnE_vs_chisq} shows the distribution of sources in the ln$\,\epsilon$, $\chi^2$ plane for the early burst SFH and full filterset reconstruction (see next section). The catastrophic failures form the long tail extending to low $\epsilon$ and high $\chi^2$ and can be discounted by retaining only objects with ln$\,\epsilon>0$ and $\chi_r^2<4$. In all analyses hereafter, this cut has been applied. The figure also serves to illustrate that there is not a clear relationship between the evidence and $\chi^2$, i.e., minimising $\chi^2$ is by no means equivalent to minimising $- {\rm ln}\,\epsilon$. \begin{figure} \epsfxsize=7cm {\hfill \epsfbox{lnE_vs_chisq.eps} \hfill} \epsfverbosetrue \caption{Distribution of 1000 reconstructions in the plane spanned by log-evidence and reduced $\chi^2$ for the early burst SFH and full filterset. Catastrophic failures lie in the tail extending to low $\epsilon$ and high $\chi_r^2$ and are removed in all analyses in this paper using the limits ln$\,\epsilon>0$ and $\chi_r^2<4$ indicated by the dashed lines.} \label{lnE_vs_chisq} \end{figure} \subsubsection{Dependence on filterset and SFH type} Figure \ref{sfh_recon1} shows how the method performs as a function of SFH type, filterset and redshift. Each panel corresponds to a different combination of SFH type and filterset and in every panel, the average reconstructed SFH and its standard deviation is plotted for sources in five different redshift bins: $0<z<1$, $1<z<2$, $2<z<3$, $3<z<4$ and $4<z<6$. 
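The cut on evidence and reduced $\chi^2$ used above to discard catastrophic failures is simple to apply in practice (threshold values as quoted; the example fit values are hypothetical):

```python
def keep_reconstruction(ln_evidence, chi2_reduced):
    """Retain only fits with ln(evidence) > 0 and reduced chi^2 < 4,
    discarding the ~1 per cent of catastrophic failures."""
    return ln_evidence > 0.0 and chi2_reduced < 4.0

# Hypothetical (ln evidence, reduced chi^2) pairs for four fits:
fits = [(12.3, 1.1), (-55.0, 9.8), (3.4, 3.9), (0.5, 4.2)]
good = [f for f in fits if keep_reconstruction(*f)]
```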
To allow for variation in the number of preferred SFH blocks from source to source, each reconstructed SFH was finely sampled with a small fixed time step, then re-binned to a common five-block SFH. An effect of the re-binning is to smear the reconstructed SFHs slightly, particularly when re-binning from a lower number of blocks. However, comparing with SFHs averaged over only those sources preferring five bins shows that this effect is relatively minor, with no more than five per cent of the total stellar mass being smeared between any pair of bins in all cases. The results plotted in Figure \ref{sfh_recon1} illustrate that the SFH type and filterset have a strong influence on the accuracy with which the input SFH can be recovered. In terms of the filters, the full set unsurprisingly performs best. However, a mildly surprising finding is that the half set gives very similar average SFHs, albeit with $\sim 30\%$ larger scatter on average. Clearly, the wavelength range spanned by the filterset is the important factor, rather than the existence of an extra six intermediate photometric points provided by the full set. Furthermore, the IR end of the filterset is more important than the optical end, as indicated by the bottom two rows of Figure \ref{sfh_recon1}. The optical SDSS-like set performs poorly, significantly worse than the IR set. Only in the specific case of the late burst does the SFH reconstructed using optical photometry consistently resemble the input SFH, but this cannot be reliably distinguished from the other cases. In terms of the SFH type, the late burst and constant SFHs are reconstructed the most faithfully, although the late burst is smeared slightly towards earlier times. The early burst reconstructions are more strongly smeared over the first few bins, giving rise to less star formation at early times and more during their mid-history than actually occurred.
On average, $\sim 20\%$ of the stellar mass created in the early burst is smeared into the later blocks. The stronger smearing exhibited by the early burst is a result of its bolometric luminosity being ten times smaller than that of the late burst. Nevertheless, the reconstructed SFHs still prove a useful diagnostic for the presence of early star formation activity, showing a clear excess that declines with time to accurately reproduce the latest SFR (with the exception of the optical filterset, which fails to recover a decline at all redshifts). The dual burst proves the most challenging of SFH types to reconstruct. In this case, the full and half filtersets best recover the early and late bursts, implying the necessity of both optical and IR filters, although sources at $z<1$ have more strongly smeared SFHs. Again, this demonstrates the importance of the IR filters. \begin{figure*} \epsfxsize=17.8cm {\hfill \epsfbox{sfh_recon2.eps} \hfill} \epsfverbosetrue \caption{SFH reconstruction binned by magnitude as labelled. SFH type is separated by column and filterset by row. Reconstructed SFHs are shown by the data points and lines (staggered for clarity) and apply to objects selected by $1<z<2$. Error bars show the standard deviation of objects in the magnitude bin. Grey shaded histograms are the binned input SFHs.} \label{sfh_recon2} \end{figure*} Note that the effect of regularisation on the average of a sample of SFHs is twofold. A stronger regularisation weight reduces the scatter in the sample, whilst more heavily smoothing the average SFH. This effect can be seen to an extent by comparing the reconstructed early burst SFH for the full and half filtersets in Figure \ref{sfh_recon1}. The error bars on points in the first bin with the half filterset are equal in size to or smaller than the error bars of the first bin with the full set. However, the SFHs are more heavily smeared with the half set.
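The re-binning of each reconstructed SFH onto the common five-block grid described above conserves the stellar mass formed in every overlap between old and new blocks. A minimal sketch with toy block edges and SFRs:

```python
def rebin_sfh(edges_in, sfr_in, edges_out):
    """Re-bin a piecewise-constant SFH onto new block edges, conserving
    the stellar mass (SFR x duration) formed in every overlap."""
    sfr_out = []
    for lo, hi in zip(edges_out[:-1], edges_out[1:]):
        mass = 0.0
        for (a, b), s in zip(zip(edges_in[:-1], edges_in[1:]), sfr_in):
            mass += s * max(0.0, min(hi, b) - max(lo, a))
        sfr_out.append(mass / (hi - lo))
    return sfr_out

# Toy example: a 3-block SFH re-binned onto 5 equal blocks (times in Gyr).
edges_in, sfr_in = [0.0, 4.0, 8.0, 10.0], [2.0, 0.5, 3.0]
edges_out = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
sfr_out = rebin_sfh(edges_in, sfr_in, edges_out)
```

Because each output SFR is the overlap-weighted mass divided by the block duration, the total stellar mass is preserved exactly under the re-binning.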
\subsubsection{Dependence on S/N and redshift} Figure \ref{sfh_recon1} shows that in nearly all cases, the variation in reconstructed SFHs between different redshift bins is comparable to or less than the intrinsic SFH scatter within a given bin. Generally, the low redshift sources (selected by, say, $z<2$) tend to have more smeared SFHs than their higher redshift equivalents. This is consistent with the fact that at $z\simgreat 2$, the rest-frame UV is redshifted into the optical wavebands where SEDs are much more sensitive to stellar age (see Figure \ref{seds_and_filters} -- note that the optical filterset performs worst despite this since it lacks the SED normalisation provided by the IR filters). Furthermore, since the SFHs in Figure \ref{sfh_recon1} are computed for sources selected by $24<R<25$ (i.e., they have approximately the same photometric S/N), the flux received by the IRAC filters increases with redshift, providing more discrimination at the IR end of the SED. Figure \ref{sfh_recon2} shows reconstructed SFHs for the different combinations of filterset and SFH type, but this time objects are binned by apparent magnitude. All objects are selected by $1<z<2$ to maximise the number of objects whilst maintaining a large span in apparent magnitude and thus S/N. As the figure shows, there is little variation with S/N. The averaged SFHs are very similar, although unsurprisingly, the scatter increases as the apparent magnitude falls. As can be inferred from Figures \ref{sfh_recon1} and \ref{sfh_recon2}, the reconstructed SFH can give rise to negative SFRs. This is especially true of the inadequate optical filterset. With the other three filtersets, negative SFRs still occur but such cases: 1) tend to be limited to galaxies with low S/N photometry, 2) are always consistent with a null SFR, 3) are relatively infrequent due to the optimal regularisation strength selected by the evidence.
\subsubsection{Recovery of stellar mass and metallicity} \begin{figure*} \epsfxsize=14cm {\hfill \epsfbox{mass_cf.eps} \hfill} \epsfverbosetrue \caption{Accuracy of reconstructed mass. Each panel corresponds to a different filterset as labelled. For each filterset, the reconstructed mass is plotted against the input mass for the early burst SFH, late burst SFH (reconstructed mass $\times 10$), dual burst SFH (reconstructed mass $\times 100$) and constant SFR (reconstructed mass $\times 1000$). Tables in the bottom right of each panel list the fractional scatter $\left<({\rm M}_{\rm recon}-{\rm M}_{\rm input})^2/ {\rm M}_{\rm input}^2\right>^{1/2}$ and the bias $\left<({\rm M}_{\rm recon}-{\rm M}_{\rm input})/ {\rm M}_{\rm input}\right>$ for the different SFH types.} \label{mass_cf} \end{figure*} Figure \ref{mass_cf} shows the recovered stellar mass as a function of the input mass for the different combinations of SFH type and filter set. In the lower right hand corner of each panel, a table lists the fractional scatter $\left<({\rm M}_{\rm recon}-{\rm M}_{\rm input})^2/ {\rm M}_{\rm input}^2\right>^{1/2}$ and the bias $\left<({\rm M}_{\rm recon}-{\rm M}_{\rm input})/ {\rm M}_{\rm input}\right>$ for each SFH type. As expected, the full filterset recovers the stellar mass most accurately (smallest bias) and with the least scatter. However, all cases show a negative bias such that the recovered mass is on average less than the input mass. For the full filterset, this bias ranges from $\sim 3\%$ for the constant SFR to $\sim 13\%$ for the dual burst SFH. The largest bias of $\sim 40\%$ occurs with the early burst SFH and optical filterset. However, in all cases, the bias is less than the fractional scatter. Compared with the full filterset reconstructions, the half set again performs very well given the reduction from 13 filters to seven. The fractional scatter of the half set is higher than that of the full set by $\sim 25\%$ on average. 
Similarly, the IR filterset results in an increased fractional scatter of only $\sim 30\%$ compared to the full set on average. The optical filterset gives a significantly larger scatter of around four times that of the full set or three times the IR set on average, confirming the well known fact that IR photometry is essential for the accurate measurement of stellar mass. In terms of the dependence of mass recovery on SFH type, the constant SFR masses show the smallest bias, closely followed by those of the late burst (although the late burst gives rise to significantly more scatter). The early burst masses tend to be more accurately reconstructed than the late burst or dual burst masses, especially in the case of the IR filterset where they are recovered almost as accurately as the full filterset case. This demonstrates the importance of IR filters for measuring stellar mass created in early bursts. \begin{figure*} \epsfxsize=16cm {\hfill \epsfbox{Z_cf.eps} \hfill} \epsfverbosetrue \caption{Accuracy of reconstructed metallicity. Each panel corresponds to a different filterset as labelled. For each filterset, the recovered metallicity is plotted against the input metallicity for the early burst SFH, late burst SFH (reconstructed $Z$ $\times 10^2$), dual burst SFH (reconstructed $Z$ $\times 10^4$) and constant SFR (reconstructed $Z$ $\times 10^6$). Tables in the bottom right of each panel list the fractional scatter $\left<(Z_{\rm recon}- Z_{\rm input})^2/ Z_{\rm input}^2\right>^{1/2}$ and the bias $\left<(Z_{\rm recon}-Z_{\rm input})/ Z_{\rm input}\right>$ for the different SFH types.} \label{Z_cf} \end{figure*} Figure \ref{Z_cf} plots the recovered metallicity as a function of the input metallicity for all SFH types and filtersets. The scatter in the recovered metallicity, particularly at low $Z$ ($<0.1 Z_{\odot}$), is larger than the scatter seen in the reconstructed mass but the global trends are essentially the same. 
The full filterset recovers metallicity most accurately, the half filterset and IR filterset having a scatter larger by $\sim 60\%$ and $\sim 120\%$ respectively on average. The very large scatter exhibited by the optical filterset demonstrates that recovery of metallicity without IR filters is extremely unreliable. In all cases, the recovered metallicity is larger than the input value, although, as with the recovered mass, this bias is always significantly lower than the scatter. \subsubsection{SFH resolution} In the previous sections, SFHs were re-binned to bring them to a common resolution of five blocks to enable comparison between reconstructions. In this section, the dependence of the reconstructed SFH resolution (i.e., the number of blocks, $N_{\rm block}$) on data quality, SFH type and filterset is considered. \begin{figure} \epsfxsize=8.2cm {\hfill \epsfbox{nblock_hist.eps} \hfill} \epsfverbosetrue \caption{Distribution of the optimal number of SFH blocks, $N_{\rm block}$, chosen by the Bayesian evidence for different redshift and magnitude selections, SFHs and filtersets. The reference selection shown by the unshaded histogram in each panel satisfies the criteria $1<z<2$ and $R>24$ with the full filterset and early burst SFH.} \label{nblock_hist} \end{figure} Figure \ref{nblock_hist} shows how the distribution of $N_{\rm block}$ varies as the data vary. The top panel shows that higher S/N data allow a higher SFH resolution, sources selected by $R<23$ preferring five to six blocks on average, compared with $R>24$ sources preferring an average of four to five blocks. The panel second from top shows how the resolution varies as a function of redshift for sources of approximately constant S/N ($R>24$). The differences are not significant, with sources across all redshifts preferring five blocks on average. The third panel from top in Figure \ref{nblock_hist} shows how the SFH resolution depends on SFH type. In this case, there are more significant differences.
The late burst, dual burst and constant SFR histories allow a higher resolution of six blocks on average, compared to five for the early burst. Finally, the bottom panel shows the dependence of resolution on filterset. Unsurprisingly, the full set allows the highest resolution on average, with the majority of galaxies preferring four or five SFH blocks. In comparison, the distribution in resolution of the reduced filtersets is skewed to lower numbers of SFH blocks, particularly the IR set. Clearly, there is a degeneracy between the SFH resolution and the regularisation weight, since a higher level of regularisation acts to smooth the SFH, effectively reducing its resolution. In Figure \ref{ev_contours}, two example confidence regions are shown in the plane spanned by regularisation weight and $N_{\rm block}$, computed from the Bayesian evidence. The heavy contours correspond to the late burst SFH and the thin contours to the early burst SFH for a $z=1$, $Z=0.1Z_{\odot}$, $M_R=-18$ and $A_V=0$ galaxy. The inclination of both sets of contours shows that this degeneracy does indeed exist. However, the degeneracy is weak and therefore locating the maximum in the evidence distribution is relatively straightforward. Figure \ref{nblock_hist} illustrates that the number of SFH blocks that can be recovered on average is comparable to the typical number recovered by \citet{tojeiro07} from optical spectra. However, there are two major differences in the present study that make this an unfair comparison: here, mono-metallic populations are considered and the filtersets extend to the IR. Increasing the number of parameters to describe a time-varying metallicity will reduce the number of SFH blocks that can be recovered (see Section \ref{sec_summary}). Similarly, the IR filters provide extra constraints on the SFH, allowing reconstruction at a slightly higher resolution.
As Figure \ref{nblock_hist} shows (for the early burst, but this applies generally), the full filterset recovers more SFH blocks on average than the optical set, which differs only in lacking the IR bands. \begin{figure} \epsfxsize=8.0cm {\hfill \epsfbox{ev_contours.eps} \hfill} \epsfverbosetrue \caption{Confidence limits on regularisation weight, $w$, and number of SFH blocks, $N_{\rm block}$, for a $z=1$, $Z=0.1Z_{\odot}$, $M_R=-18$ and $A_V=0$ galaxy generated using the early burst SFH (thin contours) and late burst SFH (thick contours). Contours are computed from the evidence and correspond to 68\%, 95.4\% and 99.7\% confidence levels.} \label{ev_contours} \end{figure} \section{Summary} \label{sec_summary} The primary aim of this study has been to assess reconstruction of discretised SFHs using a new method applied to multi-band photometric data. Although not tested in this paper, the method can also be applied to spectroscopic data, or to a mixture of spectroscopic and multi-band data. The method differs from existing methods by maximising the Bayesian evidence instead of minimising $\chi^2$ (or maximising the posterior probability). For regularised solutions, the evidence gives the unbiased relative probability of the fit between different model parameterisations. This is unlike the $\chi^2$ statistic, which suffers from an ambiguous number of degrees of freedom that changes between parameterisations when regularisation is applied. This work has demonstrated that the evidence allows the data to correctly and simultaneously set the optimal regularisation strength and the appropriate number of blocks in the reconstructed SFH. Although negative SFRs can arise, the optimal level of regularisation ensures that the fraction of such cases is low. Negative SFRs are limited mainly to galaxies with low photometric S/N and inadequate filter sets (e.g., the optical set considered in this work).
Provided the filter set is adequate, negative SFRs are always consistent with a null SFR. This approach may be preferable to schemes that enforce positive SFRs. Enforcing positivity not only risks artificial ringing and biasing in the reconstructed SFH, it also hides problems that give rise to negative SFRs. Application of the method to a range of synthetic galaxy catalogues generated with varying passband sets and SFHs demonstrates that the use of multi-band data in constraining SFHs is feasible, albeit with certain caveats. The scatter seen in the SFHs reconstructed in this work shows that occasional significant inaccuracies can occur even with a comprehensive filterset that extends up to near-IR and mid-IR wavelengths. Therefore, interpretation of SFHs recovered solely from multi-band photometry on a galaxy by galaxy basis should be conducted with some caution. The mean SFH of a sample of galaxies, which averages out these uncertainties, is therefore a more reliable quantity; indeed, this study indicates that averaging over only four galaxies readily allows a late burst to be distinguished from an early burst. In comparison, studies using spectroscopic data show that reliable SFHs can be derived for individual galaxies. Nevertheless, multi-band photometry allows reconstruction of SFHs for many times more galaxies than spectroscopic methods for the same amount of observing time. The most important factor governing the accuracy of the reconstructed SFHs is the wavelength range spanned by the filterset. The results show little difference between two filtersets that span approximately the same wavelength range (optical to mid-IR), despite one set having half the number of filters of the other. Conversely, SFHs based on purely optical photometry are completely unreliable: none of the input SFHs investigated could be distinguished from one another.
A filterset consisting of only near and mid IR filters ($Z'$ -- 4.5$\mu$m) allows recovery of SFHs with an accuracy comparable to that achieved when optical filters are also included, implying that the majority of the SFH constraints are provided by near and mid-IR data (for the SFHs tested here). In terms of the ability of multi-band photometry to constrain different SFH types, the results show that, apart from the case where only optical filters are used, early bursts of star formation can be differentiated from late bursts, and both of these can be distinguished from dual bursts and constant SFRs. However, early bouts of star formation activity are always artificially smeared to later times in the reconstructed SFH compared to the input SFH. These findings apply specifically to the SFHs considered in this work, where the early burst gives rise to a bolometric luminosity that is one tenth that of the late burst. A quick test has revealed that a stronger early burst is more accurately recovered, with less smearing to late times. In addition, although the dual burst SFH used here suggests that recovery of more than two bursts would be infeasible with the filtersets tested, bursts with more similar bolometric luminosities can be more readily recovered. This was demonstrated by \citet{ocvirk06}, who showed that CSP SEDs constructed from flux normalised bursts allow a higher SFH resolution on average than SEDs constructed from mass normalised bursts. The results presented in this paper have been obtained using the \citet{bruzual03} spectral libraries. Whilst the exact values of the numerical results quoted here will depend on the specific SED library of choice, there are no compelling reasons to suggest that the observed trends would not remain valid generally. This study has considered a specific case where galaxy redshift and extinction are known prior to reconstructing the SFH.
Also, mono-metallic stellar populations have been assumed, where the metallicity does not evolve as the galaxy ages. Clearly, the more general problem necessitates maximising the evidence over extra parameters. The expected effect of this is that the maximum evidence would shift to lower SFH resolutions on average. Although generalising to a variable redshift and extinction is a relatively small expansion of the non-linear parameter space, incorporating a time-varying metallicity in addition results in a significantly larger and more complex non-linear parameter space. This increases the time required to locate the maximum evidence and increases the risk of becoming trapped at a local maximum. However, there are two small reprieves. The first is that the metallicity history can be regularised in a similar manner to the SFH, smoothing the evidence surface and therefore easing maximisation. The second exploits SED libraries with discrete metallicities. As shown by \citet{tojeiro07}, finding the optimal metallicity within the range spanned by two tabulated values of metallicity is also a linear problem which can be directly combined with the linear inversion of the SFH. In this way, optimising the metallicity for each SFH block reduces to searching a smaller number of discrete values. A full investigation of the general case will be presented in forthcoming work.
\section{Introduction} One-dimensional compressible Euler equations for the isentropic flow with damping in Eulerian coordinates read as follows: \begin{equation}\label{EDE} \begin{split} \rho_t+(\rho u)_x&=0,\\ \rho u_t+\rho uu_x +p_x&=-\rho u, \end{split} \end{equation} with initial data $\rho(0,x)=\rho_0(x)\geq 0$ and $u(0,x)=u_0(x)$ prescribed. Here $\rho$, $u$, and $p$ denote respectively the density, velocity, and pressure. We consider polytropic gases: the equation of state is given by $p=A\rho^\gamma$, where $A$ is an entropy constant and $\gamma>1$ is the adiabatic gas exponent. When the initial density function contains vacuum, the vacuum boundary $\Gamma$ is defined as \[ \Gamma=cl\{(t,x):\rho(t,x)>0\}\cap cl\{(t,x): \rho(t,x)=0\}\, \] where $cl$ denotes the closure. A vacuum boundary is called \textbf{physical} if \begin{equation}\label{pvb} 0<|\frac{\partial c^2}{\partial x}|<\infty \end{equation} in a small neighborhood of the boundary, where $c=\sqrt{\frac{d}{d\rho}p(\rho)}$ is the sound speed. This physical vacuum behavior can be realized by some self-similar solutions and stationary solutions of different physical systems such as Euler equations with damping, Navier-Stokes equations or Euler-Poisson equations for gaseous stars. For more details and the physical background regarding this concept of the physical vacuum boundary, we refer to \cite{L2,LY2,Y}. Despite its physical importance, even the local existence theory of smooth solutions featuring the physical vacuum boundary has not yet been established. This is because the hyperbolic system becomes degenerate at the vacuum boundary; in particular, if the physical vacuum boundary condition \eqref{pvb} is assumed, the classical theory of hyperbolic systems cannot be applied \cite{LY2}: the characteristic speeds of the Euler equations are $u\pm c$, and thus they become singular, with infinite spatial derivatives, at the vacuum boundary, and this singularity creates a severe analytical difficulty.
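To make this explicit (a standard computation): under \eqref{pvb}, $c^2$ vanishes linearly in the distance $d(x)$ to the boundary, so that \[ c\sim d(x)^{\frac{1}{2}}\,,\qquad |\partial_x c|\sim \tfrac{1}{2}\,d(x)^{-\frac{1}{2}}\rightarrow\infty\;\text{ as }\;d(x)\rightarrow 0\,; \] thus the characteristic speeds $u\pm c$ remain bounded but are only H\"older continuous with exponent $\frac{1}{2}$ in $x$ at the boundary.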
To our knowledge, there has been no satisfactory theory to treat this kind of singularity. The purpose of this article is to investigate the local existence and uniqueness theory of regular solutions (in a sense that will be made precise later and which is adapted to the singularity of the problem) to compressible Euler equations featuring this physical vacuum boundary. Before we formulate our problem, we briefly review some existence theories of compressible flows with vacuum states from various aspects. We will not attempt to address exhaustive references in this paper. First, in the absence of vacuum, namely if the density is bounded away from zero, one can use the theory of symmetric hyperbolic systems; for instance, see \cite{Majda84}. In particular, the author in \cite{Sideris85} gave a sufficient condition for non-global existence when the density is bounded away from zero. When the initial data are compactly supported, there are two ways of looking at the problem. The first consists in solving the Euler equations in the whole space and requiring that the equations hold for all $x$ and $t \in (0,T)$. The second way is to require the Euler equations to hold on the set $\{(t,x):\rho(t,x)>0\}$ and to write an equation for the vacuum boundary $\Gamma$, which is a free boundary. Of course, in the first way, there is no need to know the exact position of the vacuum boundary. The authors in \cite{MUK86} wrote the system in a symmetric hyperbolic form which allows the density to vanish. The system they get is not equivalent to the Euler equations when the density vanishes. This was also used for the Euler-Poisson system. As noted by the authors of \cite{Makino92,MU87}, the requirement that $\rho^{\gamma-1 \over 2}$ is continuously differentiable excludes many interesting solutions such as the stationary solutions of the Euler-Poisson system, which have a behavior of the type $\rho \sim |x|^{1 \over \gamma -1}$ at the vacuum boundary.
This formulation was also used in \cite{Chemin90} to prove the local existence of regular solutions in the sense that $ \rho^{\gamma-1 \over 2}, u \in C([0,T); H^m(\mathbb{R}^d)) $ for some $m > 1 + d/2$, where $d$ is the space dimension (see also \cite{S} for some global existence results of classical solutions under special conditions on the initial data, obtained by extracting a dispersive effect after some invariant transformation, and \cite{Grassin98}). For the second way, and when the singularity is mild, some existence results of smooth solutions are available, based on an adaptation of the theory of symmetric hyperbolic systems. In \cite{LY1}, local in time solutions to Euler equations with damping \eqref{EDE} were constructed when $c^\alpha$, $0<\alpha\leq 1$, is smooth across $\Gamma$, by using the energy method and the characteristic method. They also prove that $C^1$ solutions across $\Gamma$ cannot be global. However, with or without damping, the methods developed therein are not applicable to the local well-posedness theory of the physical vacuum boundary. We only mention a result in \cite{XY05} for perturbations of a planar wave. For other interesting aspects of vacuum states and related problems, we refer to \cite{LY2,Y}. As the above results indicate, there is an interesting distinction between flows with damping and without damping when the long time behavior is considered. Indeed, without damping, it was shown in \cite{LS} that the shock waves vanish at the vacuum and the singular behavior is similar to the behavior of the centered rarefaction waves corresponding to the case when $c$ is regular \cite{LY2}. On the other hand, with damping, it was conjectured in \cite{L2} that, time asymptotically, Euler equations with damping \eqref{EDE} should behave like the porous media equation, for which the canonical boundary is characterized by the physical vacuum boundary condition \eqref{pvb}.
This conjecture was established in \cite{HMP} in the entropy solution framework, where the method of compensated compactness yields a global weak solution in $L^\infty$. We point out that the difficulty coming from the resonance due to vacuum therein is very different from the difficulty that we are facing, since we want to have enough regularity so that the vacuum boundary is well-defined and the evolution of the vacuum boundary can be realized. In order to understand the physical vacuum boundary behavior, the study of regular solutions is very important and fundamental; the evolution of the vacuum boundary should be considered as the free boundary/interface generated by vacuum. In the presence of viscosity, some existence theories featuring the physical vacuum boundary are available: the vacuum interface behavior as well as the regularity of one-dimensional Navier-Stokes free boundary problems were investigated in \cite{LXY}, and the local in time well-posedness of Navier-Stokes-Poisson equations in three dimensions with radial symmetry featuring the physical vacuum boundary was established in \cite{J}. On the other hand, the free surface boundary problem was studied in \cite{L1} for the motion of a compressible liquid with vacuum by using Nash-Moser iteration; the physical boundary was treated in the sense that the pressure vanishes on the boundary and the pressure gradient is bounded away from zero, but the density has to be bounded away from vacuum, and thus the analysis is not applicable to the motion of a gas with vacuum. In the next section, we formulate the problem and state the main result: the vacuum free boundary problem is studied in Lagrangian coordinates so that the free boundary becomes fixed. By a change of variables, the equations can be written as a first order system with non-degenerate propagation speeds that have different behaviors inside the domain and on the vacuum boundary.
In order to cope with these nonlinear coefficients, which give rise to the main analytical difficulty, the new operators $V,V^\ast$ are introduced. Our theorem is stated in the $V,V^\ast$ framework. \section{Formulation and Main result} We study the initial boundary value problem for the one-dimensional Euler equations, with or without damping, for isentropic flows \eqref{EDE}. First, we impose the fixed boundary condition on one boundary $x=b\,:$ $u(t, b)=0$. The class of initial data $\rho_0$, $u_0$ of interest to us is characterized as follows: for $a\leq x\leq b$, where $-\infty<a<b\leq\infty$, \[ \begin{split} (i)&\,\,\rho_0(a)=0\,,\; 0<\frac{d}{dx}\rho_0^{\gamma-1}|_{x=a}<\infty\,;\\ (ii)& \,\;\rho_0(x)> 0 \text{ for } a<x\leq b\,;\\ (iii)&\,\int_a^b \rho_0(x)dx<\infty\,;\\ (iv)&\,\,u_0(b)=0\,. \end{split} \] The condition $(i)$ implies that the initial vacuum is physical, $(ii)$ means that $x=a$ is the only vacuum point, $(iii)$ represents the finite total mass of gas, and $(iv)$ is the compatibility condition with the boundary condition at $x=b$. We seek $\rho(t,x)$, $u(t,x)$, and $a(t)$ for $t\in [0,T]$, $T>0$ and $x\in [a(t),b]$, so that for such $t$ and $x$, \begin{equation*} \begin{split} & \rho(t,x) \text{ and } u(t,x) \text{ satisfy } \eqref{EDE}\,;\\ & \rho(t,a(t))=0\,;\;u(t,b)=0\,; \\ &0<\frac{\partial}{\partial x}\rho^{\gamma-1}|_{x=a(t)}<\infty\,. \end{split} \end{equation*} For regular solutions, the vacuum boundary $a(t)$ is the particle path through $x=a$. In one-dimensional gas dynamics, there is a natural Lagrangian coordinate transformation in which all the particle paths are straight lines: \[ y\equiv \int_{a(t)}^x \rho(t,z)dz,\;\; a(t)\leq x\leq b\,. \] Note that $0\leq y\leq M$, where $M$ is the total mass of the gas. Under this transformation, the vacuum free boundary $x=a(t)$ corresponds to $y=0$, and $x=b$ to $y=M$; thus both boundaries are fixed in $(t,y)$.
By this change of variables, \[ \partial_t=\partial_t- \rho u\,\partial_y,\;\;\partial_x=\rho {\partial_y}\,, \] the system \eqref{EDE} takes the following form in Lagrangian coordinates $(t,y)$: for $t\geq 0$ and $0\leq y\leq M$, \begin{equation} \begin{split} \rho_t+\rho^2u_y&=0\\ u_t+p_y&=-u\label{ed} \end{split} \end{equation} where $p=A\rho^\gamma$ with $\gamma>1$. The boundary conditions are given by $\rho(t,0)=0$ and $u(t,M)=0$. The physical singularity \eqref{pvb} in Eulerian coordinates corresponds to $0< |p_y|<\infty$ in Lagrangian coordinates, and thus the physical vacuum boundary condition at $y=0$ can be realized as $$\rho\sim y^{\frac{1}{\gamma}}\;\text{ for }\;y\sim 0\,.$$ Euler equations (\ref{ed}) can be rewritten as a symmetric hyperbolic system \begin{equation} \begin{split} \phi_t+\mu u_y&=0\,,\\ u_t+\mu\phi_y&=-u\,,\label{eds} \end{split} \end{equation} in the variables $$\phi=\frac{2\sqrt{A\gamma}}{\gamma-1} \rho^{\frac{\gamma-1}{2}}\text{ and }\mu=\sqrt{A\gamma}\rho ^{\frac{\gamma+1}{2}}\,.$$ Note that the propagation speed $\mu$ becomes degenerate, and the degeneracy for the physical singularity is given by $\mu\sim y^{\frac{\gamma+1}{2\gamma}}$. In order to get around this difficulty, we introduce the following change of variables: \[ \xi\equiv \frac{2\gamma}{\gamma-1}y^{\frac{\gamma-1}{2\gamma}}\;\text{ such that }\;{\partial_y}=y^{-\frac{\gamma+1}{2\gamma}} {\partial_\xi}\,. \] We normalize $A$ and $M$ appropriately such that the equations (\ref{eds}) take the form: \begin{equation*} \begin{split} \phi_t+(\frac{\phi}{\xi})^{\frac{\gamma+1}{\gamma-1}}u_{\xi}&=0\,,\\ u_t+(\frac{\phi}{\xi})^{\frac{\gamma+1}{\gamma-1}}\phi_{\xi}&=-u\,, \end{split} \end{equation*} for $t\geq 0$ and $0\leq \xi\leq 1$. The physical singularity condition $0<|p_y|<\infty$ is written as $0<|\phi_\xi| <\infty$. Thus we expect $\phi$ to behave like $\xi$ near $0$, at least for short times.
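For the reader's convenience, we record why the physical singularity yields these rates. Since $p$ vanishes at $y=0$, the condition $0<|p_y|<\infty$ forces $p=A\rho^\gamma\sim y$ near $y=0$, which is precisely $\rho\sim y^{\frac{1}{\gamma}}$; the degeneracy rate of $\mu$ follows immediately: \[ \mu=\sqrt{A\gamma}\,\rho^{\frac{\gamma+1}{2}}\sim y^{\frac{\gamma+1}{2\gamma}}\,. \] The stated relation between $\partial_y$ and $\partial_\xi$ is likewise a direct computation: \[ \frac{d\xi}{dy}=\frac{2\gamma}{\gamma-1}\cdot\frac{\gamma-1}{2\gamma}\, y^{\frac{\gamma-1}{2\gamma}-1}=y^{-\frac{\gamma+1}{2\gamma}}\,. \] In particular, $\phi\propto\rho^{\frac{\gamma-1}{2}}\sim y^{\frac{\gamma-1}{2\gamma}}\propto\xi$, consistent with the expectation that $\phi$ behaves like $\xi$ near the vacuum boundary.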
Since the damping is not important for the local theory, for simplicity we consider the pure Euler equations. Letting \[ k=k_\gamma\equiv \frac{1}{2}\frac{\gamma+1}{\gamma-1}\,, \] the Euler equations read in $(t,\xi)$ as follows: for $t\geq 0$ and $0\leq \xi\leq 1$, \begin{equation} \begin{split} \phi_t+(\frac{\phi}{\xi})^{2k}u_{\xi}&=0\,,\\ u_t+(\frac{\phi}{\xi})^{2k}\phi_{\xi}&=0\,.\label{euler} \end{split} \end{equation} The range of $k$ is $\frac{1}{2}< k<\infty$, since $\gamma>1$. When $\gamma\rightarrow 1$, $k\rightarrow \infty$, and when $\gamma=3$, we get $k=1$. Note that the propagation speed is now non-degenerate. However, its behavior is quite different in the interior and on the boundary, since $\lim_{\xi\searrow 0}\frac{\phi}{\xi}=\phi_\xi(0)$, but $\phi\leq\frac{\phi}{\xi}\leq c\phi$ if $\xi\geq \frac{1}{c}>0$. This makes it hard to apply any standard energy method to construct solutions in the current formulation. We will propose a new formulation of \eqref{euler} such that the energy estimates can be closed in an appropriate energy space. As a preparation, we first define the operators $V$ and $V^{\ast}$ associated to \eqref{euler} as follows: \begin{equation*} \begin{split} V(f)\equiv \frac{1}{\xi^{k}}\partial_\xi [\frac{\phi^{2k}}{\xi^{k}}f],\;\;\; V^{\ast}(g)\equiv-\frac{\phi^{2k}}{\xi^{k}}\partial_\xi[ \frac{1}{\xi^{k}} g]\,, \end{split} \end{equation*} for $f,\;g\in L_\xi^2$, where we have denoted $L_\xi^2[0,1]$ by $L_\xi^2$. We can think of $V$ and $V^{\ast}$ as modified first order spatial derivatives. We also incorporate the boundary condition at $\xi = 1$ into the domains of $V$ and $V^\ast$, which are given as follows: \begin{equation}\label{domain} \begin{split} &\mathcal{D}(V)=\{f\in L_\xi^2: V(f)\in L_\xi^2\}\\ &\mathcal{D}(V^\ast)=\{g\in L_\xi^2: V^\ast (g)\in L_\xi^2, \ g(\xi=1) = 0
\} \end{split} \end{equation} We also introduce the higher order operators $(V)^i$ and $(V^\ast)^i$: for $f\in \mathcal{D}(V)$ and $g\in \mathcal{D}(V^\ast)$, \begin{equation} \label{Vi} (V)^i(f)\equiv \begin{cases} (V^\ast V)^j(f) & \text{if }i=2j\\ V(V^\ast V)^j(f) & \text{if }i=2j+1 \end{cases} \end{equation} \begin{equation} \label{V*i} (V^\ast)^i(g)\equiv \begin{cases} (VV^\ast )^j(g) & \text{if }i=2j\\ V^\ast(VV^\ast )^j(g) & \text{if }i=2j+1 \end{cases} \end{equation} and the associated function spaces ${X}^{k,s}$ and ${Y}^{k,s}$ for $s$ a given nonnegative integer: \begin{equation}\label{XY} \begin{split} {X}^{k,s}&\equiv \{f\in L_\xi^2: (V)^i(f)\in L_\xi^2,\; 0\leq i\leq s\}\\ {Y}^{k,s}&\equiv \{g\in L_\xi^2: (V^\ast)^i (g)\in L_\xi^2,\;0\leq i\leq s\} \end{split} \end{equation} equipped with the following norms \[ ||f||^2_{{X}^{k,s}}\equiv \sum_{i=0}^s ||(V)^i(f)||^2_{L^2_\xi}\;\text{ and }\;||g||^2_{{Y}^{k,s}}\equiv \sum_{i=0}^s ||(V^\ast)^i(g)||^2_{L^2_\xi}\,. \] In order to emphasize the dependence of $k$, equivalently $\gamma$, we keep $k$ in the above definitions. In terms of $V$ and $V^{\ast}$, the Euler equations (\ref{euler}) can be rewritten as follows: \begin{equation} \begin{split} &\partial_t(\xi^{k}\phi) -V^{\ast}(\xi^{k}u)=0\,,\\ &\partial_t(\xi^{k}u) +\frac{1}{2k+1}V(\xi^{k}\phi)=0\,, \label{VVk} \end{split} \end{equation} with the boundary conditions \begin{equation}\label{BC} \phi(t,0)=0\:\text{ and }\:u(t,1)=0\,. \end{equation} In this new $V,V^\ast$ formulation, the system is akin to the symmetric hyperbolic system with respect to $V,V^\ast$. 
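The equivalence of \eqref{VVk} with \eqref{euler} is a direct computation: for the specific arguments appearing in \eqref{VVk}, \[ -V^{\ast}(\xi^{k}u)=\frac{\phi^{2k}}{\xi^{k}}\,u_\xi=\xi^k\Big(\frac{\phi}{\xi}\Big)^{2k}u_\xi\,, \qquad V(\xi^{k}\phi)=\frac{1}{\xi^{k}}\,\partial_\xi\big[\phi^{2k+1}\big]=(2k+1)\,\xi^k\Big(\frac{\phi}{\xi}\Big)^{2k}\phi_\xi\,, \] so that \eqref{VVk} is simply \eqref{euler} multiplied through by the weight $\xi^k$; the factor $\frac{1}{2k+1}$ compensates for the one produced by differentiating $\phi^{2k+1}$. Moreover, $V$ and $V^\ast$ are formally adjoint in $L^2_\xi$ under the boundary conditions \eqref{BC}, which is the mechanism behind the conservation law recorded below.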
In particular, the zeroth energy estimate asserts that this $V,V^\ast$ formulation retains the energy conservation property, which is equivalent to the conservation of the physical energy in Eulerian coordinates: it is well known that the energy of the Euler equations without damping is conserved for regular solutions: \[ \frac{d}{dt}\{\int_{a(t)}^b \frac{1}{2}\rho u^2 + \frac{p}{\gamma-1}dx\} =0\,, \] and in turn, since $dy=\rho dx$, it is written in the $y$ variable as \[ \frac{d}{dt}\{\int_{0}^M \frac{1}{2} u^2 +\frac{A}{\gamma-1}\rho^{\gamma-1}dy\} =0\,. \] It is routine to check that this is equivalent to \[ \frac{1}{2} \frac{d}{dt}\{\int_0^1 \xi^{\frac{\gamma+1}{\gamma-1}}u^2+\frac{\gamma-1}{2\gamma} \xi^{\frac{\gamma+1}{\gamma-1}}\phi^2d\xi\} =0\,, \] which is exactly the zeroth energy estimate of \eqref{VVk} with respect to $V,V^\ast$: \begin{equation}\label{0} \frac{1}{2} \frac{d}{dt}\{\int_0^1 \frac{1}{2k+1}|\xi^k\phi|^2+ |\xi^k u|^2 d\xi\} =0\,. \end{equation} This verifies that the $V,V^\ast$ formulation does not destroy the underlying structure of the Euler equations. Indeed, it also captures the precise structure of the singularity caused by the physical vacuum boundary, and furthermore, it enables us to perform $V,V^\ast$ energy estimates yielding the a priori estimates and the well-posedness of the system. In order to state the main result of this article precisely, we now define the energy functional $\mathcal{E}^{k,s}(\phi,u)$ by \begin{equation}\label{energy} \begin{split} \mathcal{E}^{k,s}(\phi,u)&\equiv \frac{1}{2k+1}||\xi^k\phi||^2_{L^2_\xi} +||\xi^k u||^2_{L^2_\xi}\\ &\;+\frac{1}{(2k+1)^2}||V(\xi^k\phi)||^2_{{Y}^{k,s-1}} +||V^\ast(\xi^k u)||^2_{{X}^{k,s-1}}\,. \end{split} \end{equation} To close the energy estimates, $s$ will be chosen as $s= \lceil k \rceil +3$, where $\lceil k \rceil$ denotes the ceiling function, namely $\lceil k \rceil = \min \{n\in \mathbb{Z}: k\leq n\}.$ We are now ready to state the main results.
\begin{theorem}\label{thm} Fix $k$, where $\frac{1}{2}<k<\infty$. Suppose the initial data $\phi_{0}$ and $u_0$ satisfy the following conditions: \[ \begin{split} (\text{i})\;\mathcal{E}^{k,\lceil k \rceil +3}(\phi_0,u_0)<\infty;\; (\text{ii})\;\frac{1}{C_0}\leq \frac{\phi_0}{\xi}\leq C_0\text{ for some }C_0>1\,. \end{split} \] Then there exist a time $T>0$, depending only on $\mathcal{E}^{k,\lceil k \rceil +3}(\phi_0,u_0)$ and $C_0$, and a unique solution $(\phi,u)$ to the reformulated Euler equations \eqref{VVk} with the boundary conditions \eqref{BC} on the time interval $[0,T]$ satisfying \begin{equation*} \mathcal{E}^{k,\lceil k \rceil +3}(\phi,u)\leq 2\mathcal{E}^{k,\lceil k \rceil +3}(\phi_0,u_0)\,, \end{equation*} and moreover, the vacuum boundary behavior of $\phi$ is preserved on that time interval: \begin{equation*} \frac{1}{2C_0}\leq \frac{\phi}{\xi}\leq 2C_0\,. \end{equation*} \end{theorem} \begin{remark} The evolution of the vacuum boundary $x=a(t)$ is given by $\dot{a}(t)=u(t,\xi=0)$. By Theorem \ref{thm}, one can derive that $u_\xi(t,\xi)$ is bounded and continuous in $(t,\xi)\in [0,T]\times [0,1]$, and since $u(t,0)=\int_1^0 u_\xi(t,\xi)d\xi$, we deduce that the vacuum interface is well-defined and that, within the short time $t\leq T$, the vacuum boundary stays close to its initial position, with $|a(t)-a|\leq CT$ for some constant $C$ depending on the initial energy in Theorem \ref{thm}. \end{remark} \begin{remark} The different constants in the energy functional \eqref{energy} are due to the nonlinearity of \eqref{euler} or \eqref{VVk}. The structure of the equations becomes systematic after applying $V,V^\ast$ to \eqref{VVk}, as in \eqref{VVVk} and thereafter. Since we are dealing with the local existence theory, one may work with the energy functional $\mathcal{E}^{k,s}=||\xi^k\phi||^2_{{X}^{k,s}} +||\xi^k u||^2_{{Y}^{k,s}}$, which is equivalent to \eqref{energy}.
\end{remark} Our work is a fundamental step towards a rigorous understanding of the long time behavior of regular solutions and of the vacuum boundary of Euler equations with the physical singularity. In both the damped and undamped cases, it would be interesting to study whether our solution exists globally in time. In particular, in the damped case, where the physical singularity is expected to be canonical as in the porous media equation, it would also be interesting to investigate the asymptotic relationship between our solution and regular solutions to the porous media equation. In parallel with the recent progress in free surface boundary problems where geometry is involved, we expect that our result can be generalized to the multidimensional case, since the difficulty of the physical singularity lies in how the solution behaves with respect to the normal direction to the boundary. We leave this for future study. The physical vacuum boundary also naturally appears in Euler-Poisson equations for gaseous stars. It would be very interesting to study the behavior of solutions and the vacuum boundary under the influence of gravitation. The method of the proof is based on a careful study of the $V,V^\ast$ operators and on $V,V^\ast$ energy estimates. The first key ingredient is to establish the relationship between multiplication by $\frac{1}{\xi}$, a common operation embedded in the equations \eqref{euler} or \eqref{VVk}, and $V,V^\ast$, by using the underlying functional analytic properties of $V,V^\ast$, which can be obtained from a thorough examination of the behavior of $V,V^\ast$ near the vacuum boundary $\xi=0$. Another essential idea is to find the right form of the spatial derivatives of $\partial_t\phi$, which is critical in order to cope with the strong nonlinearity, in particular in the second and third order equations, and to close the energy estimates in the end. For large $k$, this can be done by introducing the representation formula for $(V^\ast)^i(\xi^ku)$.
In a similar vein, due to the strong nonlinearity, the approximate scheme starts from the third order equation, and the lower order terms, including $\phi$ and $u$ themselves, are recovered by integration and the boundary conditions. Lastly, we point out that it is not trivial to establish the well-posedness of the linear approximate systems, and the duality argument is employed for that purpose as in \cite{AM}. The rest of the paper is organized as follows: In Section \ref{3}, we study the operators $V$ and $V^\ast$ built into the reformulated Euler equations \eqref{euler} or \eqref{VVk}. In Section \ref{4}, we establish the a priori estimates in the $V, V^\ast$ formulation. Based on the a priori estimates, in Section \ref{5}, we implement the approximate scheme and prove that each approximate system is well-posed. In Section \ref{6}, we finish the proof of Theorem \ref{thm}. In Section \ref{7}, the duality argument is presented. \section{Preliminaries}\label{3} Throughout this section, we assume that $\phi$ is a given nonnegative, smooth function of $t$ and $\xi$, and moreover, that $\frac{\phi}{\xi}$ and $\partial_\xi\phi$ are bounded from below and above near $\xi=0$. \subsection{Basic properties of $V$ and $V^\ast$} In this subsection, we study the operators $V$ and $V^\ast$. Let us denote by $C_0^\infty((0,1)) $ (respectively $C_0^\infty((0,1]) $) the set of $C^\infty$ functions with compact support in $(0,1)$ (respectively $(0,1]$). \begin{lemma} \begin{equation} \label{density} \begin{split} \overline{C_0^\infty((0,1])}^{X^{k,1}} & = \mathcal{D}(V)= X^{k,1}\:;\;\; \\ \;\, \overline{C_0^\infty((0,1))}^{Y^{k,1}} & = \mathcal{D}(V^\ast)= Y^{k,1}\,. \end{split} \end{equation} \end{lemma} \begin{proof} Take $f \in X^{k,1}$. Then \begin{equation*} \|f \|_{X^{k,1}}^2 = \int_0^1 \frac1{\xi^{2k}} |\partial_\xi (\frac{\phi^{2k}}{\xi^k} f)|^2 + |f|^2 \ d\xi < \infty. \end{equation*} We make the change of variable $y = \xi^{2k+1}$ and set $F(y) = \frac{\phi^{2k}}{\xi^k} f $. 
Hence \begin{equation*} \|f \|_{X^{k,1}}^2 = \int_0^1 (2k+1)|\partial_y F|^2 + \frac1{2k+1} \frac{\xi^{4k}}{\phi^{4k}} \frac1{y^{4k \over 2k + 1}} |F|^2 \ dy < \infty. \end{equation*} Since $2k > 1$, we have ${4k \over 2k + 1} > 1 $ and hence necessarily $F(0) = 0$. Applying the Hardy inequality to $F$, we deduce that \begin{equation*} \int_0^1 \frac{F^2}{y^2} dy \leq C \int_0^1 |\partial_y F|^2 \, dy. \end{equation*} Hence, going back to the original variables, we deduce that for $f \in X^{k,1}$, we have \begin{equation} \label{Hardy} \int_0^1 \frac{f^2}{\xi^2} \ d\xi \leq C \| \frac{\xi}{\phi} \|^{4k}_{L^\infty_\xi} \| V(f) \|_{L_\xi^2}^2\,. \end{equation} Now, consider a cut-off function $\chi \in C^\infty (\mathbb{R})$ given by $ \chi(\xi) = 0 $ for $\xi \leq 1/2 $ and $\chi (\xi ) = 1 $ for $\xi \geq 1$. We also define $\chi_n (\xi) = \chi(n \xi)$. For $f \in X^{k,1}$, we define $f_n(\xi) = \chi_n( \xi) f (\xi) $. Hence, $f_n \in X^{k,1}$ and it is clear that $f_n$ goes to $f$ in $L^2_\xi$. Moreover, \begin{equation*} V(f -f_n) = (1-\chi_n) V(f) - n \chi'(n \xi) \frac{\phi^{2k}}{\xi^{2k}} f\,. \end{equation*} The first term on the right hand side goes to zero in $L^2_\xi$ when $n$ goes to infinity. For the second term, we use the fact that \begin{equation*} \int_0^1 |n \chi'(n \xi)\frac{\phi^{2k}}{\xi^{2k}} f|^2 d\xi \leq C\, \| \frac{\phi}{\xi} \|_{L^\infty_\xi}^{4k} \int_0^1 \frac{f^2}{\xi^2} 1_{\frac1{2n} \leq \xi \leq \frac1n } d\xi \end{equation*} which goes to zero when $n$ goes to infinity in view of \eqref{Hardy}. Hence, we deduce that $f_n $ goes to $f$ in $X^{k,1}$. Now, it is clear that $f_n$ can be approximated in $X^{k,1}$ by functions which are in $C_0^\infty((0,1])$. Indeed, one can just convolve $f_n$ with a mollifier. Hence, the first equality of \eqref{density} holds. To prove a similar result for $V^\ast$, we take $g \in Y^{k,1}$ and fix some $c \in (0,1]$. 
For $\xi \in (0,c]$, we have \begin{equation*} \begin{split} |\frac{g(\xi)}{\xi^k} | &= |- \int_\xi^c \partial_\xi(\frac{g(\xi')}{\xi^{'k}} ) d\xi' + \frac{g(c)}{c^k}| \\ & \leq \left(\int_\xi^c \frac{\phi(\xi')^{4k}}{ |\xi'|^{2k }} |\partial_\xi(\frac{g(\xi')}{\xi^{'k}} )|^2 d\xi' \int_\xi^c \frac{|\xi'|^{2k }}{ \phi(\xi')^{4k} } d\xi' \right)^{1/2} + |\frac{g(c)}{c^k}| \\ & \leq \ \epsilon \xi^{1-2k \over 2} + |\frac{g(c)}{c^k}| \end{split} \end{equation*} where $\epsilon = \left(C \int_0^c \frac{\phi(\xi')^{4k}}{ |\xi'|^{2k }} |\partial_\xi(\frac{g(\xi')}{\xi^{'k}} )|^2 d\xi'\right)^{1/2}$. By choosing $c$ small enough, we can make $\epsilon$ small. Hence, we deduce that for all $\epsilon > 0$, there exists a constant $C_\epsilon(g)$ such that for all $\xi \in (0,1]$, we have \begin{equation*} |g(\xi)| \leq \epsilon \ \sqrt{\xi} + C_\epsilon \xi^k. \end{equation*} We set $g_n = \chi_n g$; it is clear that $g_n$ goes to $g$ in $L_\xi^2$. Moreover, \begin{equation*} V^\ast(g -g_n) = (1-\chi_n) V^\ast (g) - n \chi'(n \xi)\frac{\phi^{2k}}{\xi^{2k}} g. \end{equation*} The second term on the right hand side satisfies \begin{equation*} \int_0^1 |n \chi'(n \xi)\frac{\phi^{2k}}{\xi^{2k}} g|^2 d\xi \leq C \int_0^1 ( \frac{\epsilon^2}{\xi} + C_\epsilon^2 \xi^{2k-2}) 1_{\frac1{2n} \leq \xi \leq \frac1n } d\xi\,. \end{equation*} Hence, since $\epsilon> 0$ is arbitrary, it goes to zero when $n$ goes to infinity. Therefore, we deduce that $g_n$ goes to $g$ in $Y^{k,1}$. Now, it is clear that $g_n$ can be approximated by functions which are in $C_0^\infty((0,1))$, and the second equality of \eqref{density} follows. 
\end{proof} \begin{lemma} (1) For $f\in X^{k,1},\;g\in Y^{k,1}$, \[ \int V(f)\cdot g d\xi =\int f\cdot V^{\ast}(g) d\xi\,. \] (2) $\ker V=\{0\}$ and $\ker V^\ast = \{0\}$. (3) The commutators of $\partial_t$ with $V$ and $V^{\ast}$ are given by $V_t$ and $V_t^\ast$: \[ \partial_t V(f)=V(\partial_t f)+V_t(f),\;\; \partial_t V^{\ast}(g)=V^{\ast}(\partial_t g)+V_t^{\ast}(g) \] where \[ V_t(f) \equiv2k \frac{1}{\xi^{k}}\partial_\xi [\frac{\phi^{2k-1}\partial_t\phi}{\xi^{k}}f],\;\;V_t^{\ast}(g) \equiv-2k \frac{\phi^{2k-1}\partial_t\phi}{\xi^{k}}\partial_\xi[ \frac{1}{\xi^{k}} g] \] In addition, \[ V_t^{\ast}(g)= 2k\frac{\partial_t\phi}{\phi}V^\ast(g) \text{ and }\; VV_t^\ast (g)=V_tV^\ast (g). \] \end{lemma} \begin{proof} The proof of this lemma follows directly from the density property proved in the previous lemma. The lemma will be used frequently in the energy estimates. \end{proof} The following lemma displays a key ingredient of the main estimates. Dividing by $\xi$ is a common operation embedded in the equations \eqref{euler} and \eqref{VVk}. The lemma shows that this operation, which acts like a derivative as $\xi$ approaches $0$, is completely controlled by the modified derivatives $V$ and $V^\ast$. \begin{lemma}\label{key} (1) If $f\in X^{k,1}$ and $g\in Y^{k,1}$, then $\frac{f}{\xi}$ and $\frac{g}{\xi}$ are bounded in $L^2_\xi$ and we obtain the following inequalities: \begin{equation}\label{gxi} \begin{split} ||\frac{f}{\xi}||_{L^2_\xi}\leq C ||\frac{\xi}{\phi}||_{L_\xi^\infty}^{2k} ||V f||_{L^2_\xi},\;\;||\frac{g}{\xi}||_{L_\xi^2}\leq C ||\frac{\xi}{\phi}||_{L_\xi^\infty}^{2k} ||V^\ast g||_{L_\xi^2} \end{split} \end{equation} (2) More generally, if $f\in L^2_\xi$ satisfies $\frac{V(f)}{\xi^{m-1}} \in L^2_\xi $ for some nonnegative real number $m$, $m+k > \frac32$, then \begin{equation*} ||\frac{f}{\xi^m}||_{L_\xi^2}\leq C ||\frac{\xi}{\phi}||_{L_\xi^\infty}^{2k} ||\frac{V f}{\xi^{m-1}}||_{L_\xi^2}\,. 
\end{equation*} (3) Also, if $g$ satisfies $ \frac{g}{\xi^{m-1}} \in L^2_\xi $ and $\frac{V^\ast(g) }{\xi^{m-1}} \in L^2_\xi $ for some nonnegative real number $m$, $ m < k + \frac12 $, then \begin{equation} \label{gxim} ||\frac{g}{\xi^m}||_{L_\xi^2}\leq C ||\frac{\xi}{\phi}||_{L_\xi^\infty}^{2k} ||\frac{V^\ast g}{\xi^{m-1}}||_{L_\xi^2} + \| \frac{g}{\xi^{m-1}} \|_{L^2_\xi}\,. \end{equation} Here, $m$ is not necessarily an integer. In practice, $m$ will be chosen with $\frac{1}{2}<m\leq k$. \end{lemma} \begin{proof} Point (1) was already proved for $f$ in the previous lemma via the Hardy inequality (see \eqref{Hardy}). For the second inequality, consider first $g \in C_0^\infty((0,1))$; then \[ \int V^\ast g \cdot \frac{\xi^{2k}} {\phi^{2k}} \frac{g }{\xi}d\xi=-\int \partial_\xi(\frac{g}{\xi^{k}})\cdot \xi^{k}\frac{g}{\xi}d\xi=\frac{2k-1}{2} \int |\frac{g}{\xi}|^2d\xi - \frac12 |g(1)|^2 =\frac{2k-1}{2} \int |\frac{g}{\xi}|^2d\xi\,, \] since $g(1) =0$. As $k> \frac{1}{2}$ and \[ \int V^\ast g \cdot \frac{\xi^{2k}} {\phi^{2k}} \frac{g }{\xi}d\xi\leq ||\frac{\xi}{\phi}||_{L_\xi^\infty}^{2k}||V^\ast g||_{L_\xi^2}||\frac{g}{\xi}||_{L_\xi^2}, \] we obtain the desired result. The case where $g \in Y^{k,1}$ follows by density. Now, we concentrate on (2). For $\tilde \phi$ satisfying the same type of bounds as $\phi$, we define $\tilde V_\alpha (f) = \frac1{\xi^\alpha} \partial_\xi ( \frac{\tilde \phi^{2\alpha}}{\xi^\alpha} f ) . $ For $f$ as in (2), we use that \begin{equation*} \frac{V(f)}{\xi^{m-1}} = \frac1{\xi^{k+m-1}}\partial_\xi ( \frac{\phi^{2k}}{\xi^{k-m+1}} \frac{f}{\xi^{m-1}} ) = \tilde V_{k+m-1} ( \frac{f}{\xi^{m-1}} ) \end{equation*} where $\tilde \phi $ is given by $\tilde \phi^{2(k+m-1)} = \phi^{2k} \xi^{2(m-1)} $. Since $k+m-1> \frac12$, we can apply the estimate \eqref{Hardy} with $V$ replaced by $\tilde V_{k+m-1}$ and $f$ replaced by $\frac{f}{\xi^{m-1}} $. This gives the desired bound. 
For (3), we write \begin{equation*} \frac{V^\ast(g)}{\xi^{m-1}} = - \frac{\phi^{2k}}{\xi^{2k}} \tilde V_{m-1-k} (\frac{g}{\xi^{m-1}}) \end{equation*} with $\tilde \phi =\xi$. Since $m-1-k < -\frac12$, $\tilde V_{m-1-k} $ satisfies the same estimates as $V^\ast$; in particular, the second estimate of \eqref{gxi} holds for $\tilde V_{m-1-k} $, hence \eqref{gxim} holds. \end{proof} \begin{remark} We note that the boundary conditions on $f$ and $g$ at $\xi=0$ in Lemma \ref{key} are embedded in $X^{k,1}$ and $Y^{k,1}$: the $L_\xi^2$ boundedness of $Vf$ and $V^\ast g $ forces $f$ and $g$ to vanish at $\xi=0$; in fact, it forces them to vanish at least like $\xi^{1/2}$. \end{remark} As a direct result of Lemma \ref{key}, we obtain the following $L_\xi^2$ estimates for $\frac{f}{\xi^m}$ and $\frac{g}{\xi^m}$: \begin{corollary}\label{cor} For a given nonnegative real number $m$ with $m<k+\frac32$, if $f\in X^{k,\lceil m\rceil}$, then there exists $C_1$ only depending on $||\frac{\xi}{\phi}||_{L_\xi^\infty}$ so that \[ ||\frac{f}{\xi^m}||_{L_\xi^2}\leq C_1 ||f||_{X^{k, \lceil m\rceil}}\,. \] Also, for a given nonnegative real number $m$ with $m<k+\frac12$, if $g\in Y^{k,\lceil m\rceil}$, then there exists $C_2$ only depending on $||\frac{\xi}{\phi}||_{L_\xi^\infty}$ so that \[ ||\frac{g}{\xi^m}||_{L_\xi^2}\leq C_2 ||g||_{Y^{k, \lceil m\rceil}}\,. \] \end{corollary} \begin{proof} We apply (2) and (3) of Lemma \ref{key} alternately until the negative powers of $\xi$ disappear. Note that the conditions on $m$ come from the one in (3) of Lemma \ref{key}. \end{proof} We next prove Sobolev embedding inequalities in the $V,V^\ast$ setting, which will be useful tools to control the nonlinear terms. 
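To make the alternating scheme in the proof of Corollary \ref{cor} concrete, the following sketch (not part of the original argument) records the two steps for a $g$-type function in the illustrative range $k>\frac32$, $1<m\leq 2$, so that the hypotheses of (1), (2) and (3) of Lemma \ref{key} hold at each step; all constants depend only on $||\frac{\xi}{\phi}||_{L_\xi^\infty}$.

```latex
% Illustrative two-step alternation for g with 1 < m <= 2 and k > 3/2.
\[
\begin{split}
\Big\|\frac{g}{\xi^{m}}\Big\|_{L^2_\xi}
&\leq C\Big\|\frac{V^\ast g}{\xi^{m-1}}\Big\|_{L^2_\xi}
  +\Big\|\frac{g}{\xi^{m-1}}\Big\|_{L^2_\xi}
\qquad\text{by (3), since } m<k+\tfrac12,\\
&\leq C\Big\|\frac{VV^\ast g}{\xi^{m-2}}\Big\|_{L^2_\xi}
  +C\,\|V^\ast g\|_{L^2_\xi}
\qquad\text{by (2) applied to } f=V^\ast g \text{ (valid since } (m-1)+k>\tfrac32\text{)}\\
&\qquad\qquad\qquad\qquad\qquad\quad\;\;
  \text{and (1) applied to } g \text{ (using } m-1\leq 1\text{)}.
\end{split}
\]
```

Since $m-2\leq 0$ and $0<\xi\leq 1$, the right hand side is bounded by $C\,\|g\|_{Y^{k,2}}$, in agreement with $\lceil m\rceil=2$ in the corollary.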
\begin{lemma}\label{sup} If $f\in X^{k, \lceil k\rceil +1}$ and $g\in Y^{k, \lceil k\rceil +1}$, then there exist constants $C_3$ and $C_4$ only depending on $||\frac{\xi}{\phi}||_{L_\xi^\infty}$ so that \begin{equation*} ||\frac{f}{\xi^k}||_{L_\xi^\infty}\leq C_3 ||f||_{X^{k, \lceil k\rceil +1}}\text{ and }||\frac{ g}{\xi^k}||_{L_\xi^\infty}\leq C_4 ||g||_{Y^{k, \lceil k\rceil +1}}\,. \end{equation*} \end{lemma} \begin{proof} We start with the $g$ part. By the definitions of $V$ and $V^\ast$, one finds that \[ \partial_\xi[ \frac{g}{\xi^k} ] = - \frac{\xi^{2k}}{\phi^{2k}} \frac{V^\ast g}{\xi^k}\,. \] Thus, by the Sobolev embedding theorem in one dimension, it suffices to show that \[ \frac{g}{\xi^k},\;\;\frac{V^\ast g}{\xi^k}\in L_\xi^2\,. \] This follows from Corollary \ref{cor}. Hence, we conclude that the $L^\infty_\xi$ bound of $\frac{g}{\xi^k}$ is controlled by $\|g\|_{Y^{k, \lceil k\rceil +1}}$. For the $f$ part, we show that $\|\frac{\phi^{2k}}{\xi^{2k}}\frac{f}{\xi^k}\|_{L^\infty_\xi}$ is bounded by $\|f\|_{X^{k, \lceil k\rceil +1}}$. Note that \[ \partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\frac{f}{\xi^k} ] = \frac{Vf}{\xi^k}-2k\frac{\phi^{2k}}{\xi^{2k}}\frac{f}{\xi^{k+1}}\,. \] Thus, by applying Corollary \ref{cor}, we obtain the desired conclusion. \end{proof} More generally, we obtain the following: \begin{lemma}\label{infty} Let $0\leq j < k-\frac12$ be a given nonnegative real number. If $f\in X^{k,\lceil j\rceil+1}$ and $g\in Y^{k, \lceil j\rceil+1}$, then there exist constants $C_5$ and $C_6$ only depending on $||\frac{\xi}{\phi}||_{L_\xi^\infty}$ so that \begin{equation*} ||\frac{f}{\xi^j}||_{L_\xi^\infty}\leq C_5 ||f||_{X^{k,\lceil j\rceil+1}}\text{ and }||\frac{ g}{\xi^j}||_{L_\xi^\infty}\leq C_6 ||g||_{Y^{k, \lceil j\rceil+1}}\,. \end{equation*} \end{lemma} \begin{proof} We only treat $\frac{g}{\xi^j}$; the $f$ part can be handled as in Lemma \ref{sup}. 
Note that \[ \partial_\xi[\frac{g}{\xi^j}]=-\frac{\xi^{2k}}{\phi^{2k}} \frac{V^\ast g}{\xi^j}+(k-j)\frac{g}{\xi^{j+1}}\,. \] Hence, by the Sobolev embedding theorem, it suffices to show that $\frac{g}{\xi^j} $, $ \frac{V^\ast g}{\xi^j}\text{ and }\frac{g}{\xi^{j+1}}$ are in $ L_\xi^2$. This follows from Corollary \ref{cor}. Note that $j$ has to be less than $k-\frac12$. \end{proof} Next we present the product rule for the operators $V,V^\ast$. \begin{lemma} \label{product rule} Let $f,\,g\in \mathcal{D}(V)\cap\mathcal{D}(V^\ast)$ be given, and let $h$ be a given smooth function. The following identities hold: \[ \begin{split} \bullet&\;V^\ast f=-Vf+2k\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi\frac{f}{\phi} =-Vf+\frac{2k}{2k+1}\frac{V(\xi^k \phi)}{\xi^k}\frac{f}{\phi}\\ \bullet&\;Vg=-V^\ast g+2k\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi\frac{g}{\phi} =-V^\ast g+\frac{2k}{2k+1}\frac{V(\xi^k \phi)}{\xi^k}\frac{g}{\phi}\\ \bullet&\; V(fh)=V(f)h+ f\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi h,\;\; V^\ast (gh)=V^\ast (g)h- g\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi h\\ \bullet&\;\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi h=Vh+k\frac{\phi^{2k}}{\xi^{2k}} (\frac{1}{\xi}-2\frac{\partial_\xi\phi}{\phi}) {h}=-V^\ast h+k\frac{\phi^{2k}}{\xi^{2k}} \frac{h}{\xi}\\ \bullet&\;\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi [fg]=V(f)g+\frac{\phi^{4k}}{\xi^{3k}} f\partial_\xi[\frac{\xi^k }{\phi^{2k}}g] =V(f) g-f V^\ast (g)+ 2k\frac{\phi^{2k}}{\xi^{2k}}(\frac{1}{\xi}-\frac{\partial_\xi\phi}{\phi})fg \end{split} \] \end{lemma} All of the identities follow by direct computation from the definitions of $V$ and $V^\ast$. The lemma shows that when $V$ or $V^\ast$ acts on a function $h$, it can yield, depending on that function, $\frac{h}{\xi}$ or $ \frac{V(\xi^k\phi)}{\xi^k}\frac{h}{\phi}$ in addition to $Vh$ or $V^\ast h$. 
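As a consistency check, the first identity of Lemma \ref{product rule} follows from a one-line computation with the definitions of $V$ and $V^\ast$:

```latex
\[
V(f)=\frac{1}{\xi^{k}}\partial_\xi\Big[\frac{\phi^{2k}}{\xi^{k}}\,f\Big]
=\frac{\phi^{2k}}{\xi^{k}}\partial_\xi\Big[\frac{f}{\xi^{k}}\Big]
 +\frac{f}{\xi^{2k}}\,\partial_\xi\big[\phi^{2k}\big]
=-V^\ast(f)+2k\,\frac{\phi^{2k}}{\xi^{2k}}\,\partial_\xi\phi\,\frac{f}{\phi}\,,
\]
```

which gives the identity after rearranging. The second form then follows from $V(\xi^k\phi)=\frac{1}{\xi^{k}}\partial_\xi[\phi^{2k+1}]=(2k+1)\frac{\phi^{2k}\partial_\xi\phi}{\xi^{k}}$, so that $\frac{2k}{2k+1}\frac{V(\xi^k\phi)}{\xi^{k}}\frac{f}{\phi}=2k\,\frac{\phi^{2k}}{\xi^{2k}}\,\partial_\xi\phi\,\frac{f}{\phi}$.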
\subsection{Homogeneous operators $\overline{V},\overline{V}^\ast$} Next we introduce the homogeneous counterparts $\overline{V}$ and $\overline{V}^\ast$ of $V$ and $V^\ast$ as follows: \begin{equation}\label{ho} \begin{split} \overline{V}(f)\equiv \frac{1}{\xi^{k}}\partial_\xi [\xi^kf],\;\;\; \overline{V}^\ast(g)\equiv-\xi^{k}\partial_\xi[ \frac{g}{\xi^{k}}] \ \hbox{and} \ g(\xi=1) = 0. \end{split} \end{equation} These homogeneous operators are the special cases of $V$ and $V^\ast$ in which $\phi$ is simply taken to be $\xi$. For a given nonnegative integer $s$, the function spaces ${\overline{X}}^{k,s}$ and ${\overline{Y}}^{k,s}$ are defined as follows: \begin{equation}\label{XY1} \begin{split} {\overline{X}}^{k,s}&\equiv \{f\in L_\xi^2: (\overline{V})^i(f)\in L_\xi^2,\; 0\leq i\leq s\}\\ {\overline{Y}}^{k,s}&\equiv \{g\in L_\xi^2: (\overline{V}^\ast)^i (g)\in L_\xi^2,\;0\leq i\leq s\} \end{split} \end{equation} where $ (\overline{V})^i$ and $(\overline{V}^\ast)^i $ are defined as in \eqref{Vi} and \eqref{V*i}. These spaces are equipped with the following norms: \[ ||f||^2_{{\overline{X}}^{k,s}}\equiv \sum_{i=0}^s ||(\overline{V})^i(f)||^2_{L^2_\xi}\;\text{ and }\;||g||^2_{{\overline{Y}}^{k,s}}\equiv \sum_{i=0}^s ||(\overline{V}^\ast)^i(g)||^2_{L^2_\xi}\,. \] Indeed, $\overline{V}$, $\overline{V}^\ast$ and $V$, $V^\ast$ share many good properties; we summarize the analogs of Lemmas \ref{key}, \ref{sup}, \ref{infty} and \ref{product rule} for $\overline{V}$, $\overline{V}^\ast$ in the following. 
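For orientation, the Hardy-type bound in part (1) of Lemma \ref{linear-prop} can be checked directly in the case $m=1$: for $f\in C_0^\infty((0,1])$ (the general case follows by density),

```latex
\[
\int_0^1 \overline{V}(f)\,\frac{f}{\xi}\,d\xi
=\int_0^1\Big(\partial_\xi f+k\,\frac{f}{\xi}\Big)\frac{f}{\xi}\,d\xi
=\frac{|f(1)|^2}{2}+\Big(k+\frac{1}{2}\Big)\int_0^1\frac{f^2}{\xi^2}\,d\xi\,,
\]
```

so by the Cauchy-Schwarz inequality $(k+\frac12)\|\frac{f}{\xi}\|_{L^2_\xi}^2\leq \|\overline{V}f\|_{L^2_\xi}\,\|\frac{f}{\xi}\|_{L^2_\xi}$, and hence $\|\frac{f}{\xi}\|_{L^2_\xi}\leq \|\overline{V}f\|_{L^2_\xi}$, since $k+\frac12>1$ when $k>\frac12$.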
\begin{lemma}\label{linear-prop} (1) If $f\in L^2_\xi$ satisfies $\frac{\overline{V}f}{\xi^{m-1}}\in L^2_\xi$ for some real number $m$, $m+k>\frac32$, then \[ ||\frac{f}{\xi^m}||_{L_\xi^2}^2\leq ||\frac{\overline{V}f}{\xi^{m-1}} ||_{L_\xi^2}^2 \] Also, if $g$ satisfies $\frac{ g}{\xi^{m-1}}\in L^2_\xi$ and $\frac{\overline{V}^\ast g}{\xi^{m-1}}\in L^2_\xi$ for some real number $m$, $m<k+\frac12$, then \[ ||\frac{g}{\xi^m}||_{L_\xi^2}^2\leq ||\frac{\overline{V}^\ast g}{\xi^{m-1}} ||_{L_\xi^2}^2+||\frac{g}{\xi^{m-1}}||_{L_\xi^2}^2\,. \] (2) If $f\in \overline{X}^{k, \lceil k\rceil +1}$ and $g\in \overline{Y}^{k, \lceil k\rceil +1}$, then we obtain \begin{equation*} ||\frac{f}{\xi^k}||_{L_\xi^\infty}\leq C ||f||_{\overline{X}^{k, \lceil k\rceil +1}}\text{ and }||\frac{ g}{\xi^k}||_{L_\xi^\infty}\leq C ||g||_{\overline{Y}^{k, \lceil k\rceil +1}}\,. \end{equation*} (3) Let $0\leq j < k-\frac12$ be a given nonnegative number. If $f\in \overline{X}^{k,\lceil j\rceil+1}$ and $g\in \overline{Y}^{k, \lceil j\rceil+1}$, then we obtain \begin{equation*} ||\frac{f}{\xi^j}||_{L_\xi^\infty}\leq C ||f||_{\overline{X}^{k,\lceil j\rceil+1}}\text{ and }||\frac{g}{\xi^j}||_{L_\xi^\infty}\leq C ||g||_{\overline{Y}^{k, \lceil j\rceil+1}}\,. \end{equation*} (4) Product rule for $\overline{V}$, $\overline{V}^\ast$: \[ \overline{V}^\ast f=-\overline{V}f+2k\frac{f}{\xi},\; \overline{V}(fh)=\overline{V}(f) h+f\partial_\xi h,\; \overline{V}^\ast(gh)=\overline{V}^\ast(g) h-g\partial_\xi h. \] \end{lemma} Now we show that two norms induced by $V,V^\ast$ and $\overline{V},\overline{V}^\ast$ are equivalent. \begin{proposition}\label{equiv e} Let $\phi$ be given so that $||\xi^k\phi||_{X^{k,\lceil k\rceil +2}} \leq A$ and $\frac{1}{C}<\frac{\xi}{\phi}<C$ for positive constants $A,\,C$. 
Then for any $f\in X^{k,\lceil k\rceil } $ and $g\in Y^{k,\lceil k\rceil}$, there exists a constant $M$ depending only on $A,\,C$ so that \begin{equation}\label{equiv} \begin{split} \frac{1}{M}|| f||_{\overline{X}^{k,\lceil k\rceil }}^2\leq || f||_{X^{k,\lceil k\rceil }}^2\leq M || f||_{\overline{X}^{k,\lceil k\rceil }}^2\,,\; \frac{1}{M}|| g||_{\overline{Y}^{k,\lceil k\rceil }}^2\leq || g||_{Y^{k,\lceil k\rceil }}^2\leq M|| g||_{\overline{Y}^{k,\lceil k\rceil }}^2\,. \end{split} \end{equation} \end{proposition} \begin{proof} We will only prove the second inequality ($\leq$). The other one can be shown in the same way. Let us start with $Vf$ and $V^\ast g$. \[ \begin{split} &Vf = \frac{1}{\xi^k}\partial_\xi[\frac{\phi^{2k}}{\xi^k}f]= \frac{\phi^{2k}}{\xi^{2k}}\overline{V}f+\partial_\xi [\frac{\phi^{2k}}{\xi^{2k}}]f=\frac{\phi^{2k}}{\xi^{2k}}\overline{V}f+ \{\frac{2k}{2k+1}\frac{V(\xi^k\phi)}{\xi^k\phi}-2k\frac{\phi^{2k}}{\xi^{2k+1}} \}f\\ & V^\ast g=-\frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{g}{\xi^k}] =\frac{\phi^{2k}}{\xi^{2k}}\overline{V}^\ast g \end{split} \] Next, note that by the product rule of Lemma \ref{product rule} and \ref{linear-prop}, the higher order terms $(V)^if$ and $(V^\ast)^ig$ can be expanded into the following form in terms of $(\overline{V})^jf$ and $(\overline{V}^\ast)^jg$ for $j\leq i$: \[ (V)^if=\sum_{j=0}^i \Psi^1_j(\xi^k\phi)\cdot(\overline{V})^{i-j}f,\;\; (V^\ast)^ig=\sum_{j=0}^{i-1} \Psi^2_j(\xi^k\phi)\cdot(\overline{V}^\ast)^{i-j}g \] where for $s=1$ or $2$ \[ \Psi_j^s(\xi^k\phi)\equiv\sum_{r=0}^j \{C_{rs}\frac{1}{\xi^r}\cdot \sum_{\substack{l_1+\cdots+l_p=j-r\\l_1,\dots, l_p\geq 1}}{C_{l_1\cdots l_ps}} \prod_{q=1}^{p} \frac{(V)^{l_q}(\xi^k\phi)}{\xi^k\phi}\} \] for some functions $C_{rs}$, $C_{l_1\cdots l_ps}$ which may only depend on $k$, $\frac{\phi}{\xi}$ and $\frac{\xi}{\phi}$ and therefore $C_{rs}$, $C_{l_1\cdots l_ps}$ are bounded by some power function of $C$. 
In order to show \eqref{equiv}, first we rewrite $(V)^if$ and $(V^\ast)^ig$ as \[ (V)^if=\sum_{j=0}^i \xi^j\Psi^1_j(\xi^k\phi)\cdot\frac{(\overline{V})^{i-j}f}{\xi^j},\;\; (V^\ast)^ig=\sum_{j=0}^{i-1} \xi^j\Psi^2_j(\xi^k\phi)\cdot\frac{(\overline{V}^\ast)^{i-j}g}{\xi^j} \] Since $||\frac{(\overline{V})^{i-j}f}{\xi^j}||_{L^2_\xi}$ and $||\frac{(\overline{V}^\ast)^{i-j}g}{\xi^j}||_{L^2_\xi}$ are bounded by $||f||_{\overline{X}^{k,i}}$ and $||g||_{\overline{Y}^{k,i}}$ respectively and moreover, by Lemma \ref{infty}, $||\xi^j\Psi^s_j||_{L^\infty_\xi}$ is bounded by $||\xi^k\phi||_{X^{k,j+2}}$ for $j\leq \lceil k\rceil$, by adding all the inequalities over $i\leq\lceil k\rceil$ we obtain the desired inequality as well as the desired bound $M$ as a function of $A$ and $C$. \end{proof} The above equivalence reflects the linear character of the higher order energies. Note that we cannot deduce the same result for the full energy $\mathcal{E}^{k,\lceil k\rceil +3}$ due to the nonlinearity. \section{$V,V^\ast$ a priori energy estimates}\label{4} This section is devoted to $V,V^\ast$ energy estimates leading to the following a priori estimates, a key step in the construction of strong solutions. \begin{proposition}\label{apriori} Suppose $\phi$ and $u$ solve \eqref{VVk} and \eqref{BC} with $\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u) <\infty$. If we further assume that \begin{equation} \frac{1}{C}\leq \frac{\phi}{\xi}\leq C\text{ for some }C>1\,,\label{AA} \end{equation} we obtain the following a priori estimates: \begin{equation*} \frac{d}{dt} \mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)\leq \mathcal{C}(\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)) \end{equation*} where $\mathcal{C}(\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u))$ is a continuous function of $\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)$ and $C$. Moreover, the a priori assumption \eqref{AA} can be justified: the boundedness of $\mathcal{E}^{k,\lceil k \rceil +3}(\phi,u)$ implies the boundedness of $\frac{\phi}{\xi}$. 
\end{proposition} In order to illustrate the idea of the proof, we start with the simplest case $k=1$, for which the corresponding $\gamma$ is $3$. In the next subsections, we generalize it to arbitrary $k>\frac{1}{2}$. \subsection{A priori estimates for $k=1$ $(\gamma=3)$} When $k=1$, the Euler equations read as follows: \begin{equation} \begin{split} \phi_t+(\frac{\phi}{\xi})^{2}u_{\xi}&=0\\ u_t+(\frac{\phi}{\xi})^{2}\phi_{\xi}&=0\label{euler1} \end{split} \end{equation} The operators $V$ and $V^{\ast}$ take the form: \begin{equation*} \begin{split} V(f)\equiv \frac{1}{\xi}\partial_\xi (\frac{\phi^{2}}{\xi}f),\;\;\; V^{\ast}(g)\equiv-\frac{\phi^{2}}{\xi}\partial_\xi( \frac{1}{\xi} g) \end{split} \end{equation*} In terms of $V$ and $V^{\ast}$, (\ref{euler1}) can be rewritten as follows: \begin{equation} \begin{split} &\partial_t(\xi\phi) -V^{\ast}(\xi u)=0\\ &\partial_t(\xi u) +\frac{1}{3}V(\xi\phi)=0 \label{VV} \end{split} \end{equation} The energy functional $\mathcal{E}^{1,4}(\phi,u)$ reads as follows: \begin{equation} \mathcal{E}^{1,4}(\phi,u)\equiv \int\frac{1}{3}|\xi\phi|^2 +|\xi u|^2d\xi+ \sum_{i=1}^4\int\frac{1}{9}|(V)^i(\xi\phi)|^2 +|(V^\ast)^i(\xi u)|^2d\xi \label{ef} \end{equation} Before carrying out the energy estimates, we verify the assumption \eqref{AA}. In order to do so, we examine each term in the energy functional \eqref{ef}. Let us start with $V$, $V^{\ast}V$, $VV^{\ast}V$, $V^\ast VV^{\ast}V$ of $\xi \phi$. 
\begin{align*} \begin{split} &\bullet\; V(\xi\phi)= \frac{1}{\xi}\partial_\xi [\phi^{3}]=3 \frac{\phi^{2}} {\xi}\partial_\xi\phi\\ &\bullet\; V^{\ast}V(\xi\phi)=-3 \frac{\phi^2} {\xi}\partial_\xi [\frac{\phi^2}{\xi^2}\partial_\xi\phi] \\ &\bullet\; VV^{\ast}V(\xi\phi)=- \frac{3}{\xi} \partial_\xi[ \frac{\phi^{4}} {\xi^{2}}\partial_\xi [\frac{\phi^2}{\xi^{2}}\partial_\xi \phi]]\\ &\bullet\; V^{\ast}VV^{\ast}V(\xi\phi)= 3\frac{\phi^{2}} {\xi}\partial_\xi[ \frac{1}{\xi^2} \partial_\xi[ \frac{\phi^{4}} {\xi^{2}}\partial_\xi [\frac{\phi^2}{\xi^{2}}\partial_\xi\phi]]] \end{split} \end{align*} One advantage of the $V,V^\ast$ formulation is that $L_\xi^\infty$ control of $\phi$ is cheap to get: since, integrating over $(0,\xi)$ and using $\phi(0)=0$, \[ \int_0^\xi \xi'\phi\cdot V(\xi\phi)\, d\xi'=\int_0^\xi \phi\,\partial_{\xi'}[\phi^{3}]\,d\xi'= \frac{3}{4}\phi^{4}(\xi)\,, \] we deduce that \begin{equation} ||\phi||_{L_\xi^\infty}^4\leq ||\xi\phi||_{L_\xi^2}^2+ ||V(\xi\phi)||_{L_\xi^2}^2\leq \mathcal{E}^{1,4}(\phi,u)\label{supphi} \end{equation} The boundedness of $\partial_\xi\phi$ also follows from the boundedness of $\mathcal{E}^{1,4}(\phi,u)$: first note that \[ \frac{\phi^2}{\xi^2}\partial_\xi\phi=\frac{1}{3}\frac{V(\xi\phi)}{\xi} \] and then, by applying Lemma \ref{sup} to $g=V(\xi\phi)$ when $k=1$, we deduce that $\frac{\phi^2}{\xi^2}\partial_\xi\phi$ is bounded and continuous, and in turn that $\partial_\xi\phi$ is in $L_\xi^\infty$ under the assumption \eqref{AA}. In the same way, by applying Lemma \ref{sup} to $f=V^\ast V(\xi\phi)$ when $k=1$, we can derive that $\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi]$ is bounded. However, we remark that this does not imply that $\partial_\xi^2\phi$ is bounded in our energy space, since it is not clear how to control $\partial_\xi[\frac{\phi}{\xi}]$. Thus we keep these expressions as they are rather than trying to go back to the standard Sobolev spaces. Next we turn to the $u$ variable. We list $V^\ast$, $VV^{\ast}$, $V^\ast VV^{\ast}$, $VV^\ast VV^{\ast}$ of $\xi u$. 
\begin{align} \begin{split} &\bullet\; V^\ast (\xi u)=-\frac{\phi^{2}} {\xi}\partial_\xi u\\ &\bullet\; VV^{\ast}(\xi u)=-\frac{1}{\xi}\partial_\xi[\frac{\phi^4}{\xi^2} \partial_\xi u]\\ &\bullet\; V^{\ast}VV^\ast (\xi u)=\frac{\phi^2}{\xi}\partial_\xi [\frac{1}{\xi^2}\partial_\xi[\frac{\phi^4}{\xi^2} \partial_\xi u]]\\ &\bullet\; VV^{\ast}VV^\ast (\xi u)=\frac{1}{\xi}\partial_\xi[\frac{\phi^4}{\xi^2}\partial_\xi [\frac{1}{\xi^2}\partial_\xi[\frac{\phi^4}{\xi^2} \partial_\xi u]]] \end{split}\label{u} \end{align} Note that $\frac{\partial_t\phi}{\xi}$ can be estimated in terms of $\partial_\xi u$ via the equation \eqref{euler1}: \begin{equation*} {\partial_t\phi}=-\frac{\phi^{2}} {\xi^2}\partial_\xi u=\frac{V^\ast (\xi u)}{\xi} \end{equation*} Apply Lemma \ref{sup} to deduce that $\frac{VV^\ast(\xi u)}{\xi}=\frac{1}{\xi^2}\partial_\xi[\phi^2\partial_t\phi]$ is bounded and continuous if $\mathcal{E}^{1,4}(\phi,u)$ is bounded. Letting $h$ be $\frac{1}{\xi^2}\partial_\xi[\phi^2\partial_t\phi]$, we can write $\phi^2{\partial_t\phi}=\int_0^\xi\xi'^2h\, d\xi'$, and therefore we conclude that $\frac{\partial_t\phi}{\xi}$ is also bounded in terms of $\mathcal{E}^{1,4}(\phi,u)$ and $C$. Writing $\frac{\phi}{\xi}$ as \[ \frac{\phi}{\xi}(t,\xi)=\frac{\phi}{\xi}(0,\xi)+\int_0^t \frac{\partial_t\phi}{\xi}(\tau,\xi) d\tau\,, \] we conclude that for a short time, the boundary behavior of $\frac{\phi}{\xi}$ is preserved and in particular, this justifies the assumption \eqref{AA}. We now perform the energy estimates. The zeroth order energy is conserved as given by \eqref{0}. 
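As an illustration of how the adjoint relation $\int V(f)\cdot g\,d\xi=\int f\cdot V^\ast(g)\,d\xi$ enters, the zeroth order conservation can be recovered in one line from \eqref{VV}: multiplying the first equation by $\frac{1}{3}\,\xi\phi$ and the second by $\xi u$, and integrating,

```latex
\[
\frac{1}{2}\frac{d}{dt}\int \frac{1}{3}|\xi\phi|^2+|\xi u|^2\,d\xi
=\frac{1}{3}\int V^{\ast}(\xi u)\cdot \xi\phi\, d\xi
 -\frac{1}{3}\int V(\xi\phi)\cdot \xi u\, d\xi=0\,.
\]
```

The same cancellation of the top order cross terms recurs at every level of the estimates that follow.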
Apply $V$ and $V^{\ast}$ to \eqref{VV} and use $V_t(\xi\phi)=\frac{2}{3} \partial_tV(\xi\phi)$ \begin{equation} \begin{split} &\partial_tV(\xi\phi) -3VV^{\ast}(\xi u)=0\\ &\partial_tV^{\ast}(\xi u) +\frac{1}{3}V^{\ast}V(\xi \phi)=V_t^{\ast}(\xi u) \label{VVV} \end{split} \end{equation} Multiply by $\frac{1}{9}V(\xi\phi)$ and $V^{\ast}(\xi u)$ and integrate to get \begin{equation} \frac{1}{2}\frac{d}{dt}\int\frac{1}{9} |V(\xi\phi)|^2 +|V^{\ast}(\xi u)|^2d\xi=\int V_t^{\ast}(\xi u) \cdot V^{\ast}(\xi u)d\xi\label{e1} \end{equation} Apply $V^{\ast}$ and $V$ to \eqref{VVV} to get \begin{equation}\label{V2} \begin{split} &\partial_tV^{\ast}V(\xi\phi) -3V^{\ast}VV^{\ast}(\xi u)=V_t^{\ast} V(\xi\phi)\\ &\partial_tVV^{\ast}(\xi u) +\frac{1}{3}VV^{\ast}V(\xi \phi)=VV_t^{\ast}(\xi u)+V_t V^{\ast}(\xi u)=2V_t V^{\ast}(\xi u) \end{split} \end{equation} Multiply by $\frac{1}{9}V^\ast V(\xi\phi)$ and $VV^{\ast}(\xi u)$ and integrate to get \begin{equation} \begin{split} \frac{1}{2}\frac{d}{dt}\int\frac{1}{9} |V^\ast V(\xi\phi)|^2 +|VV^{\ast}(\xi u)|^2d\xi=\int \frac{1}{9} V_t^{\ast} V(\xi\phi) \cdot V^{\ast}V(\xi\phi)d\xi\\+\int 2V_t V^{\ast}(\xi u) \cdot VV^{\ast}(\xi u)d\xi\label{e2} \end{split} \end{equation} Apply $V^{\ast}$ and $V$ to \eqref{V2} to get \begin{equation}\label{V3} \begin{split} &\partial_tVV^{\ast}V(\xi\phi) -3VV^{\ast}VV^{\ast}(\xi u)=VV_t^{\ast} V(\xi\phi)+V_tV^\ast V(\xi\phi)=2V_tV^\ast V(\xi\phi)\\ &\partial_tV^{\ast}VV^{\ast}(\xi u) +\frac{1}{3}V^{\ast}VV^{\ast}V(\xi \phi)=2V^{\ast}V_t V^{\ast}(\xi u)+V_t^\ast VV^\ast (\xi u) \end{split} \end{equation} Multiply by $\frac{1}{9}VV^\ast V(\xi\phi)$ and $V^\ast VV^{\ast}(\xi u)$ and integrate to get \begin{equation} \begin{split} \frac{1}{2}\frac{d}{dt}\int\frac{1}{9} |VV^\ast V(\xi\phi)|^2 +|V^\ast VV^{\ast}(\xi u)|^2d\xi=\int \frac{2}{9} V_tV^\ast V(\xi\phi) \cdot VV^{\ast}V(\xi\phi)d\xi\\ +\int 2V^{\ast}V_t V^{\ast}(\xi u) \cdot V^{\ast}VV^{\ast}(\xi u)d\xi+\int V_t^\ast VV^\ast (\xi u)\cdot 
V^{\ast}VV^{\ast}(\xi u)d\xi \end{split}\label{e3} \end{equation} Apply $V^{\ast}$ and $V$ to \eqref{V3} to get \begin{equation}\label{V4} \begin{split} &\partial_tV^\ast VV^{\ast}V(\xi\phi) -3V^\ast VV^{\ast}VV^{\ast}(\xi u)= 2V^\ast V_tV^\ast V(\xi\phi)+V_t^\ast VV^{\ast}V(\xi\phi)\\ &\partial_tVV^{\ast}VV^{\ast}(\xi u) +\frac{1}{3}VV^{\ast}VV^{\ast}V(\xi \phi)=2VV^{\ast}V_t V^{\ast}(\xi u)+2V_tV^\ast VV^\ast (\xi u) \end{split} \end{equation} Multiply by $\frac{1}{9}V^\ast VV^\ast V(\xi\phi)$ and $VV^\ast VV^{\ast}(\xi u)$ and integrate to get \begin{equation} \begin{split} &\frac{1}{2}\frac{d}{dt}\int\frac{1}{9} |V^\ast VV^\ast V(\xi\phi)|^2 +|VV^\ast VV^{\ast}(\xi u)|^2d\xi\\ &=\int \frac{2}{9} V^\ast V_tV^\ast V(\xi\phi) \cdot V^\ast VV^{\ast}V(\xi\phi)d\xi +\int \frac{1}{9}V_t^\ast VV^{\ast}V(\xi\phi)\cdot V^\ast VV^{\ast}V(\xi\phi)d\xi\\&\;\;+\int 2VV^{\ast}V_t V^{\ast}(\xi u)\cdot VV^{\ast}VV^{\ast}(\xi u)d\xi +\int 2V_tV^\ast VV^\ast (\xi u) \cdot VV^{\ast}VV^{\ast}(\xi u)d\xi \end{split}\label{e4} \end{equation} In order to prove Proposition \ref{apriori}, it now remains to estimate the following nonlinear terms coming from the energy estimates in terms of the energy functional \eqref{ef}: \begin{equation} \begin{split} \int 2V_t V^{\ast}(\xi u) \cdot VV^{\ast}(\xi u)d\xi,\; \int \frac{2}{9} V_tV^\ast V(\xi\phi) \cdot VV^{\ast}V(\xi\phi)d\xi,\\ \int 2V^{\ast}V_t V^{\ast}(\xi u) \cdot V^{\ast}VV^{\ast}(\xi u)d\xi,\;\int \frac{2}{9} V^\ast V_tV^\ast V(\xi\phi) \cdot V^\ast VV^{\ast}V(\xi\phi)d\xi, \\ \int 2VV^{\ast}V_t V^{\ast}(\xi u)\cdot VV^{\ast}VV^{\ast}(\xi u)d\xi, \;\int 2V_tV^\ast VV^\ast (\xi u) \cdot VV^{\ast}VV^{\ast}(\xi u)d\xi \end{split}\label{mixed} \end{equation} Our goal is to control these terms by our energy functional \eqref{ef}. All of them contain $\partial_t\phi$ and its derivatives with suitable weights. 
The estimates of the $\partial_t\phi$-related terms can be obtained through the equation \eqref{euler1} by estimating $V^\ast,\;VV^\ast,\; V^\ast VV^\ast,\; VV^\ast VV^\ast$ of $\xi u$ in terms of $\partial_t\phi$. Before going any further, let us try to get a better understanding of them. First, we rewrite $V^\ast,\;VV^\ast,\; V^\ast VV^\ast,\; VV^\ast VV^\ast$ of $\xi u$, namely \eqref{u}, in terms of $\partial_t\phi$: \begin{equation} \begin{split} &\bullet\; V^\ast (\xi u)=-\frac{\phi^2}{\xi}\partial_\xi u=\xi \partial_t\phi\\ &\bullet\; VV^{\ast}(\xi u)=\frac{1}{\xi}\partial_\xi[\phi^2\partial_t \phi]=\frac{\phi^3}{\xi}\partial_\xi[\frac{\partial_t\phi}{\phi}] +3\frac{\phi}{\xi}\partial_\xi\phi\partial_t\phi= \boxed{\frac{\phi^3}{\xi}\partial_\xi[\frac{\partial_t\phi}{\phi}]}+V(\xi\phi) \frac{\partial_t\phi}{\phi}\\ &\bullet\; V^{\ast}VV^\ast (\xi u)=-\frac{\phi^2}{\xi}\partial_\xi [\frac{\phi^3}{\xi^2}\partial_\xi[\frac{\partial_t\phi}{\phi}]]-3 \frac{\phi^2}{\xi}\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi] \frac{\partial_t\phi}{\phi}-3 \frac{\phi^2}{\xi}\frac{\phi^2}{\xi^2}\partial_\xi\phi \partial_\xi[ \frac{\partial_t\phi}{\phi}]\\ &\qquad\qquad\qquad\;\,= -\frac{1}{\xi\phi}\partial_\xi [\frac{\phi^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi}{\phi}]]-3 \frac{\phi^2}{\xi}\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi] \frac{\partial_t\phi}{\phi}\\ &\qquad\qquad\qquad\;\,= \boxed{-\frac{1}{\xi\phi}\partial_\xi [\frac{\phi^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi}{\phi}]]} +V^\ast V(\xi\phi)\frac{\partial_t\phi}{\phi}\label{tphi}\\ &\bullet\; VV^{\ast}VV^\ast (\xi u)=-\frac{1}{\xi}\partial_\xi[\frac{\phi}{\xi^2}\partial_\xi [\frac{\phi^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi}{\phi}]]]-3\frac{1}{\xi} \partial_\xi[\frac{\phi^4}{\xi^2}\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi] \frac{\partial_t\phi}{\phi}]\\ &\qquad\qquad\qquad\qquad\;\, 
=\boxed{-\frac{1}{\xi}\partial_\xi[\frac{\phi}{\xi^2}\partial_\xi [\frac{\phi^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi}{\phi}]]]} +VV^\ast V(\xi\phi)\frac{\partial_t\phi}{\phi}+\frac{V^\ast V(\xi\phi)}{\xi}\cdot\frac{\phi^2}{\xi}\partial_\xi [\frac{\partial_t\phi}{\phi}] \end{split} \end{equation} As we can see above, the term $\frac{\partial_t\phi}{\phi}$ and its derivatives naturally appear, and there are many ways to write them. The key idea is not to separate them randomly when distributing spatial derivatives, but to find the right form of each term. The boxed terms in \eqref{tphi} have been chosen in such a way that the remaining terms on the right hand sides are at least as integrable as the left hand sides. We analyze the most intriguing full-derivative terms $V^\ast V_tV^\ast V(\xi\phi)$ and $VV^\ast V_tV^\ast (\xi u)$. The other terms can be handled in a rather direct way. \begin{claim} $$||V^\ast V_tV^\ast V(\xi\phi)||_{L^2_\xi}^2\leq \mathcal{C}_1(\mathcal{E}^{1,4}(\phi,u))$$ where $\mathcal{C}_1(\mathcal{E}^{1,4}(\phi,u))$ is a continuous function of $\mathcal{E}^{1,4}(\phi,u)$.
\end{claim} \begin{proof} \[ \begin{split} \frac{1}{6}V^\ast V_tV^\ast V(\xi\phi)&=\frac{\phi^2}{\xi}\partial_\xi [\frac{1}{\xi^2} \partial_\xi[\frac{\partial_t\phi}{\phi}\frac{\phi^4}{\xi^2} \partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]]]\\ &={\frac{\phi^2}{\xi}\partial_\xi [\frac{1}{\xi^2}\frac{\partial_t\phi}{\phi} \partial_\xi[\frac{\phi^4}{\xi^2} \partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]]]}+ {\frac{\phi^2}{\xi}\partial_\xi [\frac{1}{\xi^2} \partial_\xi[\frac{\partial_t\phi}{\phi}]\frac{\phi^4}{\xi^2} \partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]]}\\ &=\frac{\partial_t\phi}{\phi}\frac{\phi^2}{\xi}\partial_\xi [\frac{1}{\xi^2} \partial_\xi[\frac{\phi^4}{\xi^2} \partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]]]+2\frac{\phi^2}{\xi}\partial_\xi [\frac{\partial_t\phi}{\phi}] \cdot\frac{1}{\xi^2} \partial_\xi[\frac{\phi^4}{\xi^2} \partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]]\\ &\;\;\;+\frac{\phi^2}{\xi}\partial_\xi[\frac{1}{\xi^2} \partial_\xi[\frac{\partial_t\phi}{\phi}]]\cdot\frac{\phi^4}{\xi^2} \partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]\\ &\equiv (I)+(II)+(III) \end{split} \] Since $(I)= \frac{\partial_t\phi}{\phi}V^\ast VV^\ast V(\xi\phi)$, $(I)$ is controllable: \[ ||(I)||_{L^2_\xi}\leq ||\frac{\partial_t\phi}{\phi}||_{L^\infty_\xi} ||V^\ast VV^\ast V(\xi\phi)||_{L^2_\xi} \] The second term is written as \[ (II)= -\frac{2}{3}\frac{\phi^2}{\xi}\partial_\xi [\frac{\partial_t\phi}{\phi}]\cdot \frac{VV^\ast V(\xi\phi)}{\xi} \] From \eqref{tphi}, we get \[ \frac{\phi^2}{\xi}\partial_\xi [\frac{\partial_t\phi}{\phi}]=\frac{VV^\ast (\xi u)}{\phi} -\frac{V(\xi\phi)}{\phi}\frac{\partial_t\phi}{\phi}\;\in\; L_\xi^\infty. \] Since $ ||\frac{VV^\ast V(\xi\phi)}{\xi}||_{L^2_\xi}\leq ||V^\ast VV^\ast V(\xi\phi)||_{L^2_\xi}$, $(II)$ is also controlled by the energy. Now we turn to $(III)$.
First, we note that by Lemma \ref{key} and Lemma \ref{sup}, \[ \frac{1}{\xi}\partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]\;\in\;L_\xi^2 \;\text{ and }\;\partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]\;\in\; L_\xi^\infty. \] Next let us take a look at the other factor and rewrite it by using the boxed terms in \eqref{tphi}: \[ \begin{split} \frac{\phi^6}{\xi^2}\partial_\xi[\frac{1}{\xi^2} \partial_\xi[\frac{\partial_t\phi}{\phi}]]= \frac{\phi^6}{\xi^2}\partial_\xi[\frac{1}{\phi^6}\frac{\phi^6}{\xi^2} \partial_\xi[\frac{\partial_t\phi}{\phi}]]= \frac{1}{\xi^2}\partial_\xi[\frac{\phi^6}{\xi^2} \partial_\xi[\frac{\partial_t\phi}{\phi}]]-6\frac{\phi^2}{\xi^2} \partial_\xi\phi\cdot\frac{\phi^3}{\xi^2} \partial_\xi[\frac{\partial_t\phi}{\phi}]\\ =-\frac{\phi}{\xi} V^\ast VV^\ast(\xi u)+ \frac{V^\ast V(\xi\phi)}{\xi} \frac{V^\ast (\xi u)}{\xi} -2\frac{V(\xi\phi)}{\xi}\frac{VV^\ast (\xi u)}{\xi}+2\frac{\xi}{\phi} |\frac{V(\xi\phi)}{\xi}|^2\frac{V^\ast (\xi u)}{\xi^2} \end{split} \] Thus $(III)$ can be rewritten as follows: \[ \begin{split} (III)=&-\frac{\phi}{\xi} \frac{V^\ast VV^\ast(\xi u)}{\xi}\cdot \partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]-2\frac{V(\xi\phi)}{\xi}\frac{VV^\ast (\xi u)}{\xi} \cdot\frac{1}{\xi}\partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]\\ &+\frac{V^\ast V(\xi\phi)}{\xi} \frac{V^\ast (\xi u)}{\xi}\cdot\frac{1}{\xi}\partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi]+2\frac{\xi}{\phi} |\frac{V(\xi\phi)}{\xi}|^2\frac{V^\ast (\xi u)}{\xi^2}\cdot\frac{1}{\xi}\partial_\xi[\frac{\phi^2}{\xi^2} \partial_\xi\phi] \end{split} \] Hence $(III)$ can be controlled by $\mathcal{E}^{1,4}(\phi, u)$. \end{proof} Now let us move on to $VV^\ast V_tV^\ast (\xi u)$. The treatment of this term has a different flavor. \begin{claim} $$|| VV^\ast V_tV^\ast (\xi u)||_{L_\xi^2}^2\leq \mathcal{C}_2(\mathcal{E}^{1,4}(\phi,u))$$ where $\mathcal{C}_2(\mathcal{E}^{1,4}(\phi,u))$ is a continuous function of $\mathcal{E}^{1,4}(\phi,u)$.
\end{claim} \begin{proof} We use the continuity equation in \eqref{euler1} first to deal with $V_tV^\ast (\xi u)$. Since $V^\ast (\xi u)=\xi\partial_t\phi$ and $V(\xi\partial_t\phi)=\frac{1}{\xi}\partial_\xi [\phi^2\partial_t\phi]=\frac{1}{\xi}[\phi^2\partial_\xi\partial_t\phi +2\phi\partial_\xi\phi\partial_t\phi]$, we can write $V_t V^\ast (\xi u)$ as follows: \[ \frac{1}{2}V_t V^\ast (\xi u)=\frac{1}{\xi}\partial_\xi [\phi |\partial_t\phi|^2]=\frac{1}{\xi}[\partial_\xi\phi|\partial_t\phi|^2 +2\phi\partial_t\partial_\xi\phi\partial_t\phi]=2\frac{\partial_t\phi}{\phi} V(\xi\partial_t\phi)-3\frac{\partial_\xi\phi |\partial_t\phi|^2}{\xi} \] Apply $V^\ast$: \[ \begin{split} &\frac{1}{2}V^\ast V_t V^\ast (\xi u)=-\frac{\phi^2}{\xi}\partial_\xi [2\frac{\partial_t\phi}{\phi} \frac{VV^\ast (\xi u)}{\xi}-3\frac{\partial_\xi\phi |\partial_t\phi|^2}{\xi^2}]\\ &=2\frac{\partial_t\phi}{\phi}V^\ast V V^\ast (\xi u)-2 \frac{\phi^2}{\xi}\partial_\xi[\frac{\partial_t\phi}{\phi}]\cdot\frac{VV^\ast (\xi u)}{\xi}+3\frac{\phi^2}{\xi}\partial_\xi [\frac{\phi^2}{\xi^2}\partial_\xi\phi |\frac{\partial_t\phi}{\phi}|^2]\\ &=2\frac{\partial_t\phi}{\phi}V^\ast V V^\ast (\xi u)-2 \frac{\phi^2}{\xi}\partial_\xi[\frac{\partial_t\phi}{\phi}]\underbrace{\{ \frac{VV^\ast (\xi u)}{\xi}-3\frac{\partial_t\phi}{\phi}\frac{\phi^2}{\xi^2}\partial_\xi\phi\}} _{(\star)} +3\frac{\phi^2}{\xi}\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi] |\frac{\partial_t\phi}{\phi}|^2 \end{split} \] Note that $(\star)$ reduces to: \[ (\star)=\frac{1}{\xi^2}\partial_\xi[\phi^3\frac{\partial_t\phi}{\phi}]- 3\frac{\partial_t\phi}{\phi}\frac{\phi^2}{\xi^2}\partial_\xi\phi =\frac{\phi^3}{\xi^2}\partial_\xi[\frac{\partial_t\phi}{\phi}] \] Apply $V$: \[ \begin{split} &\frac{1}{2}VV^\ast V_t V^\ast (\xi u)\\ &=2\frac{1}{\xi}\partial_\xi [\frac{\phi^2}{\xi}\frac{\partial_t\phi}{\phi}V^\ast V V^\ast (\xi u)]-2\frac{1}{\xi}\partial_\xi [\frac{1}{\phi^5}|\frac{\phi^6}{\xi^2}\partial_\xi
[\frac{\partial_t\phi}{\phi}]|^2]+3\frac{1}{\xi}\partial_\xi [\frac{\phi^4}{\xi^2}\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi] |\frac{\partial_t\phi}{\phi}|^2]\\ &=(I)+(II)+(III) \end{split} \] We rewrite $(I),\;(II),\;(III)$ as follows: \[ \begin{split} (I)&=2\frac{\partial_t\phi}{\phi}VV^\ast VV^\ast (\xi u)+ 2\frac{\phi^2}{\xi}\partial_\xi [\frac{\partial_t\phi}{\phi}]\frac{V^\ast VV^\ast (\xi u)}{\xi}\\ (II)&=10\frac{\partial_\xi\phi}{\xi\phi^6}|\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]|^2 -4\frac{1}{\xi\phi^5}\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]\cdot \partial_\xi[\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]]\\ (III)&= 3\frac{1}{\xi}\partial_\xi [\frac{\phi^4}{\xi^2}\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi]] |\frac{\partial_t\phi}{\phi}|^2+6\frac{1}{\xi} \frac{\phi^4}{\xi^2}\partial_\xi[\frac{\phi^2}{\xi^2}\partial_\xi\phi] \cdot\frac{\partial_t\phi}{\phi}\partial_\xi[\frac{\partial_t\phi}{\phi}] \end{split} \] It is easy to see that $(I)\text{ and }(III)$ can be controlled by the energy functional. On the other hand, in order to take care of $(II)$, special attention is needed. Since $\frac{V^\ast VV^\ast(\xi u)}{\xi}$ is bounded by $VV^\ast VV^\ast(\xi u)$ in $L_\xi^2$, we obtain \begin{equation} h\equiv \frac{1}{\xi^3}\partial_\xi[\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]]\in L_\xi^2\label{h}\,. \end{equation} Thus the second term in $(II)$ is bounded by the energy functional. To prove that the first term is also bounded, we claim \[ \frac{1}{\xi^7}|\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]|^2\in L_\xi^2\,.
\] By using \eqref{h}, rewrite $\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]$ as follows: \[ \frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]=\int_0^\xi\zeta^3 h d\zeta \] Applying H\"older's inequality, we observe that \[ |\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]|^2\leq \xi^7 ||h||_{L_\xi^2}^2\,. \] Hence we get \[ ||\frac{1}{\xi^7}|\frac{\phi^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi}{\phi}]|^2||_{L_\xi^2}\leq ||h||_{L_\xi^2}^2\,. \] This finishes the proof of the claim, and with it the a priori estimates for $k=1$. \end{proof} \subsection{The case when $\frac{1}{2}<k<1$ ($\gamma>3$)} In this subsection, we prove Proposition \ref{apriori} for the case $\frac{1}{2}<k<1$ and $s=4$. First, by Lemma \ref{sup}, one finds that $\partial_\xi\phi$, $\frac{\xi}{\phi}$, and $\frac{\partial_t\phi}{\xi}$ are bounded by the energy functional $\mathcal{E}^{k,4}(\phi,u)$. We recall the reformulated Euler equations \eqref{VVk}. One can apply $V,V^\ast$ alternately as in the case $k=1$, and integrate to get \[ \frac{1}{2}\frac{d}{dt}\sum_{i=1}^4\int\frac{1}{(2k+1)^2} |(V)^i(\xi^k\phi)|^2+|(V^\ast)^i(\xi^ku)|^2 d\xi\leq \text{mixed nonlinear terms}\,, \] where the mixed terms have the same form as in \eqref{mixed}. Thus it suffices to show that those nonlinear terms are bounded by $\mathcal{E}^{k,4}(\phi,u)$. We focus on the intriguing term $VV^\ast V_tV^\ast(\xi^ku)$ as well as $V^\ast V_tV^\ast(\xi^ku)$, $ V_tV^\ast(\xi^ku)$. As we saw in the case $k=1$, in order to treat the mixed terms, a careful analysis of the $\partial_t\phi$ terms is required. First, take a look at $V^\ast(\xi^ku)$ and $VV^{\ast}(\xi^k u)$.
\begin{equation*} \begin{split} &\bullet\; V^\ast (\xi^k u)=-\frac{\phi^{2k}} {\xi^k}\partial_\xi u=\xi^k\partial_t\phi\\ &\bullet\; VV^{\ast}(\xi^k u)=\frac{1}{\xi^k}\partial_\xi[\phi^{2k}\partial_t\phi] =\frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}] +(2k+1)\frac{\phi^{2k}}{\xi^k}\partial_\xi\phi\frac{\partial_t\phi} {\phi}\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \boxed{\frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}]} +V(\xi^k\phi) \frac{\partial_t\phi}{\phi} \end{split} \end{equation*} Because $||\frac{V(\xi^k\phi)}{\xi}||_{L^2_\xi}$ and $||\frac{VV^{\ast}(\xi^k u)}{\xi}||_{L^2_\xi}$ are bounded by $\mathcal{E}^{k,4}$ as an application of Lemma \ref{key}, we deduce that $||\phi^k\partial_\xi[\frac{\partial_t\phi}{\phi}]||_{L^2_\xi}$ is also bounded by $\mathcal{E}^{k,4}$. And by Lemma \ref{sup}, we obtain $\phi\partial_\xi[\frac{\partial_t\phi}{\phi}]\in L^\infty_\xi$. Now let us compute $V_tV^\ast(\xi^k u)$ and write in terms of the energy: \begin{equation*} \begin{split} \frac{1}{2k}V_tV^\ast (\xi^k u)&=\frac{1}{\xi^k}\partial_\xi [\phi^{2k+1}|\frac{\partial_t\phi}{\phi}|^2]=\frac{\partial_t\phi}{\phi}\{2 \frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}]+(2k+1) \frac{\phi^{2k}}{\xi^k}\partial_\xi\phi\frac{\partial_t\phi}{\phi}\}\\ &=2\frac{\partial_t\phi}{\phi}VV^\ast(\xi^k u)-|\frac{\partial_t\phi}{\phi}|^2V(\xi^k\phi) \end{split} \end{equation*} It is clear that this term is bounded by the energy functional. Next we write out $V^{\ast}VV^\ast (\xi^k u)$. 
\begin{equation}\label{box} \begin{split} &\bullet\; V^{\ast}VV^\ast (\xi^k u)=-\frac{\phi^{2k}}{\xi^k}\partial_\xi [\frac{\phi^{2k+1}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\,-(2k+1)\{ \frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi] \frac{\partial_t\phi}{\phi}+ \frac{\phi^{4k}}{\xi^{3k}}\partial_\xi\phi \partial_\xi[ \frac{\partial_t\phi}{\phi}]\}\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= -\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]-(2k+1) \frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi] \frac{\partial_t\phi}{\phi} \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \boxed{-\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]} +V^\ast V(\xi^k\phi)\frac{\partial_t\phi}{\phi} \end{split} \end{equation} Thus we note that the boxed term $\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]$ is the right form of the second derivative of $\partial_t\phi$, whose structure we do not want to destroy.
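As a consistency check, the second equality in \eqref{box} is the product rule applied with the splitting $\phi^{4k+2}=\phi^{2k+1}\cdot\phi^{2k+1}$; expanding the boxed term directly, \[ -\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]] =-\frac{\phi^{2k}}{\xi^k}\partial_\xi [\frac{\phi^{2k+1}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]] -(2k+1)\frac{\phi^{4k}}{\xi^{3k}}\partial_\xi\phi\, \partial_\xi[\frac{\partial_t\phi}{\phi}]\,, \] which shows how the two $\partial_\xi[\frac{\partial_t\phi}{\phi}]$ terms of the first expression are absorbed into the single boxed one.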
Here is $V^\ast V_tV^\ast(\xi^k u)$: \begin{equation*} \begin{split} &\frac{1}{2k}V^\ast V_tV^\ast (\xi^k u)=-\frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{1}{\xi^k}\frac{\partial_t\phi}{\phi}\{2 \frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}]+(2k+1) \frac{\phi^{2k}}{\xi^k}\partial_\xi\phi\frac{\partial_t\phi}{\phi}\}]\\ &=-2\frac{\partial_t\phi}{\phi}\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]] -2\frac{\phi^{4k+1}}{\xi^{3k}}|\partial_\xi[\frac{\partial_t\phi}{\phi}]|^2 -(2k+1)|\frac{\partial_t\phi}{\phi}|^2\cdot \frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi \phi] \end{split} \end{equation*} It is easy to see that the first and third terms are bounded by the energy functional. For the second term, writing it as $$-2\frac{\phi^{4k+1}}{\xi^{3k}} |\partial_\xi[\frac{\partial_t\phi}{\phi}]|^2 =\underbrace{-2\frac{\phi^{3k+1}}{\xi^{3k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]}_{L^\infty_\xi} \cdot \underbrace{\phi^k\partial_\xi[\frac{\partial_t\phi}{\phi}]} _{L^2_\xi}\,,$$ we deduce that it is also controlled by $\mathcal{E}^{k,4}$, and as a result, we conclude that $V^\ast V_tV^\ast (\xi^k u)$ is bounded by $\mathcal{E}^{k,4}$.
Next $VV^{\ast}VV^\ast (\xi^k u)$: \begin{equation*} \begin{split} \bullet\; VV^{\ast}VV^\ast (\xi^k u)=-\frac{1}{\xi^k}\partial_\xi[\frac{\phi^{2k-1}}{\xi^{2k}}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}} \partial_\xi[\frac{\partial_t\phi}{\phi}]]]-(2k+1)\frac{1}{\xi^k} \partial_\xi[\frac{\phi^{4k}}{\xi^{2k}} \partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi] \frac{\partial_t\phi}{\phi}]\\ =\boxed{-\frac{1}{\xi^k}\partial_\xi[\frac{\phi^{2k-1}}{\xi^{2k}}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}} \partial_\xi[\frac{\partial_t\phi}{\phi}]]]} +VV^\ast V(\xi^k\phi)\frac{\partial_t\phi}{\phi}+\frac{\phi^{2k}}{\xi^{2k}} {V^\ast V(\xi^k\phi)}\cdot\partial_\xi [\frac{\partial_t\phi}{\phi}] \end{split} \end{equation*} Note that the boxed term is bounded in $L^2_\xi$ by the energy functional. Now we write $VV^\ast V_tV^\ast (\xi^k u)$ in terms of the boxed terms as well as the energy: \begin{equation*} \begin{split} \frac{1}{2k}&VV^\ast V_tV^\ast (\xi^k u)=-2\frac{\partial_t\phi}{\phi}\cdot \frac{1}{\xi^k}\partial_\xi[ \frac{\phi^{2k-1}}{\xi^{2k}}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]]\\ -&6\frac{\phi^{2k+1}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]\cdot \frac{1}{\xi^k\phi^2}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]] +(4k+6) \underbrace{\partial_\xi\phi\frac{\phi^{6k}}{\xi^{5k}}|\partial_\xi [\frac{\partial_t\phi}{\phi}]|^2}_{(\star)}\\ -&(2k+1)|\frac{\partial_t\phi}{\phi}|^2\cdot\frac{1}{\xi^k}\partial_\xi [\frac{\phi^{4k}}{\xi^{2k}}\partial_\xi [\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi]]-(4k+2) \frac{\partial_t\phi}{\phi} \frac{\phi^{3k}}{\xi^{3k}}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi \phi]\cdot\phi^k\partial_\xi[\frac{\partial_t\phi}{\phi}] \end{split} \end{equation*} It is clear that, except for $(\star)$, the first factor of each term on the right hand side is bounded in $L^\infty_\xi$ and the second factor is bounded in $L^2_\xi$ by $\mathcal{E}^{k,4}$.
In order to show that $(\star)$ is bounded in $L^2_\xi$, first note that, by Lemma \ref{key} and the fact that $k> \frac{1}{2}$, $||\frac{V^{\ast}VV^\ast (\xi^k u)}{\xi}||_{L^2_\xi}$ and $||\frac{V^{\ast}V (\xi^k \phi)}{\xi}||_{L^2_\xi}$ are bounded by $\mathcal{E}^{k,4}(\phi,u)$. From the relation \eqref{box}, we thus get \[ h\equiv\frac{1}{\xi^{k+2}}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]\in L_\xi^2\,, \] and in turn we obtain \[ \begin{split} \frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}] &=\int_0^\xi |\xi'|^{k+2}hd\xi'\leq \xi^{k+\frac{5}{2}}\|h\|_{L^2_\xi}\\ \Longrightarrow ||\xi^k |\partial_\xi[\frac{\partial_t\phi}{\phi}]|^2||_{L_\xi^2}^2&\leq ||h||^4_{L_\xi^2}\int_0^1 \xi^{2k+(4k+10)-(8k+8)}d\xi \end{split} \] Note that the last integral is bounded. Therefore we conclude that $||VV^\ast V_tV^\ast (\xi^k u)||_{L^2_\xi}$ is bounded by $\mathcal{E}^{k,4}$. Similarly, we deduce the same conclusion for the other mixed terms, and this finishes the proof of Proposition \ref{apriori} for $\frac{1}{2}<k<1$. \subsection{The case when $\lceil k\rceil\geq 2$ $(1<\gamma<3)$} We now turn to general $k$. The spirit is the same as in the case $k=1$: we need to carry out $L^\infty_\xi$ estimates and nonlinear estimates. For large $k$, however, the number of mixed nonlinear terms increases accordingly and it is not an easy task to work term by term. We will present a systematic way of treating those terms involving derivatives of $\partial_t\phi$. The following lemma is a direct result of Lemmas \ref{sup} and \ref{infty}, and it is useful to justify the assumption \eqref{AA} in $\mathcal{E}^{k,\lceil k\rceil +3}(\phi,u)$.
\begin{lemma}\label{supk} (1) We obtain the following $L_\xi^\infty$ estimates: \[ ||\frac{V(\xi^k\phi)}{\xi^k}||_{L^\infty_\xi},\; ||\frac{V^\ast V(\xi^k\phi)}{\xi^k}||_{L^\infty_\xi},\; ||\frac{V^\ast(\xi^k u)}{\xi^k}||_{L^\infty_\xi},\;|| \frac{VV^\ast(\xi^k u)}{\xi^k}||_{L^\infty_\xi}\leq \mathcal{C}_3(\mathcal{E}^{k,\lceil k\rceil +3}(\phi,u)) \] where $\mathcal{C}_3(\mathcal{E}^{k,\lceil k\rceil +3}(\phi,u))$ is a continuous function of $||\frac{\xi}{\phi}||_{L_\xi^\infty}$ and $\mathcal{E}^{k,\lceil k\rceil +3}(\phi,u)$. \\ (2) For $3\leq i\leq \lceil k\rceil +1$ \[ ||\frac{(V)^i(\xi^k\phi)}{\xi^{\lceil k\rceil +2-i}}||_{L^\infty_\xi},\; ||\frac{(V^\ast)^i(\xi^k u)}{\xi^{\lceil k\rceil +2-i}}||_{L^\infty_\xi}\leq \mathcal{C}_4(\mathcal{E}^{k,\lceil k\rceil +3}(\phi,u)) \] where $\mathcal{C}_4(\mathcal{E}^{k,\lceil k\rceil +3}(\phi,u))$ is a continuous function of $||\frac{\xi}{\phi}||_{L_\xi^\infty}$ and $\mathcal{E}^{k,\lceil k\rceil +3}(\phi,u)$. \end{lemma} We start with the $L^\infty_\xi$ estimate of $\phi$. We list out $V$, $V^{\ast}V$, $VV^{\ast}V$ of $\xi^{k}\phi$ for reference. \begin{align*} \begin{split} &\bullet\; V(\xi^{k}\phi)= \frac{1}{\xi^{k}}\partial_\xi [\phi^{2k+1}]=(2k+1) \frac{\phi^{2k}} {\xi^{k}}\partial_\xi\phi\\ &\bullet\; V^{\ast}V(\xi^{k}\phi)=- (2k+1) \frac{\phi^{2k}} {\xi^{k}}\partial_\xi [\frac{\phi^{2k}} {\xi^{2k}}\partial_\xi\phi]\\ &\bullet\; VV^{\ast}V(\xi^{k}\phi)=-(2k+1) \frac{1}{\xi^{k}} \partial_\xi[ \frac{\phi^{4k}} {\xi^{2k}}\partial_\xi [\frac{\phi^{2k}} {\xi^{2k}}\partial_\xi\phi]] \end{split} \end{align*} One advantage of the $V$ and $V^\ast$ formulation is that $L_\xi^\infty$ control of $\phi$ comes cheaply: \[ \int \xi^{k}\phi\cdot V(\xi^{k}\phi) d\xi=\int \phi\partial_\xi[\phi^{2k+1}]d\xi= \frac{2k+1}{2k+2}\phi^{2k+2} \] Applying Lemma \ref{supk}, we obtain that $\partial_\xi\phi$ is bounded and continuous if the assumption \eqref{AA} holds and $\mathcal{E}^{k,\lceil k\rceil+3}(\phi, u)$ is bounded.
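The last display can be made quantitative by integrating from $\xi=0$ and applying the Cauchy-Schwarz inequality (a one-line sketch, using $\phi(0)=0$ as implicit in the display above): \[ \frac{2k+1}{2k+2}\,\phi^{2k+2}(\xi) =\int_0^\xi \xi'^{\,k}\phi(\xi')\cdot V(\xi^{k}\phi)(\xi')\,d\xi' \leq \|\xi^{k}\phi\|_{L^2_\xi}\,\|V(\xi^{k}\phi)\|_{L^2_\xi}\,, \] so that $||\phi||_{L^\infty_\xi}$ is bounded by the lowest order parts of the energy functional.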
Next we turn to the $u$ variable. \begin{align} \begin{split} &\bullet\; V^\ast (\xi^k u)=-\frac{\phi^{2k}} {\xi^k}\partial_\xi u\\ &\bullet\; VV^{\ast}(\xi^k u)=-\frac{1}{\xi^k}\partial_\xi[\frac{\phi^{4k}}{\xi^{2k}} \partial_\xi u]\\ &\bullet\; V^{\ast}VV^\ast (\xi^k u)=\frac{\phi^{2k}}{\xi^k}\partial_\xi [\frac{1}{\xi^{2k}}\partial_\xi[\frac{\phi^{4k}}{\xi^{2k}} \partial_\xi u]] \end{split}\label{uk} \end{align} Note that $\frac{\partial_t\phi}{\xi}$ can be estimated in terms of $\partial_\xi u$ via the equation \eqref{euler}: \begin{equation*} {\partial_t\phi}=-\frac{\phi^{2k}} {\xi^{2k}}\partial_\xi u=\frac{V^\ast (\xi^k u)}{\xi^k} \end{equation*} By Lemma \ref{supk}, we deduce that $\frac{VV^\ast(\xi^k u)}{\xi^k}=\frac{1}{\xi^{2k}}\partial_\xi[\phi^{2k}\partial_t\phi]$ is bounded and continuous if $\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)$ is bounded. Letting $h$ be $\frac{1}{\xi^{2k}}\partial_\xi[\phi^{2k}\partial_t\phi]$, we can write $\phi^{2k}{\partial_t\phi}=\int_0^\xi\xi'^{2k}h d\xi'$, and therefore we conclude that $\frac{\partial_t\phi}{\xi}$ is also bounded. By the same continuity argument as in the $k=1$ case, we can verify the same boundary behavior for a short time and the assumption \eqref{AA}. We now perform the energy estimates.
From \eqref{VVk}, we have the conservation of the zeroth order energy: \[ \frac{1}{2}\frac{d}{dt}\int \frac{1}{2k+1} |\xi^{k}\phi|^2+ |\xi^{k}u|^2 d\xi=0 \] Apply $V$ and $V^{\ast}$ to \eqref{VVk} and use $V_t(\xi^{k}\phi)=\frac{2k}{2k+1} \partial_tV(\xi^{k}\phi)$ to get \begin{equation} \begin{split} &\partial_tV(\xi^{k}\phi) -(2k+1)VV^{\ast}(\xi^{k}u)=0\\ &\partial_tV^{\ast}(\xi^{k}u) +\frac{1}{2k+1}V^{\ast}V(\xi^{k} \phi)=V_t^{\ast}(\xi^{k}u) \label{VVVk} \end{split} \end{equation} Apply $V^{\ast}$ and $V$ to \eqref{VVVk} and use $VV_t^\ast=V_tV^\ast$ to get \begin{equation}\label{i=2} \begin{split} &\partial_tV^{\ast}V(\xi^{k}\phi) -(2k+1)V^{\ast}VV^{\ast}(\xi^{k}u)=V_t^{\ast} V(\xi^{k}\phi)\\ &\partial_tVV^{\ast}(\xi^{k}u) +\frac{1}{2k+1}VV^{\ast}V(\xi^{k} \phi)=2V_t V^{\ast}(\xi^{k}u) \end{split} \end{equation} By repeatedly applying $V$ and $V^{\ast}$ alternately, one obtains higher order equations for $(V)^i(\xi^k\phi)$ and $(V^\ast)^i(\xi^k u)$ for any $i$. Indeed, the mixed terms on the right hand sides can be written in a systematic way.
For $i=2j+1$ where $j\geq 1$: \begin{equation} \begin{split} \partial_t(V)^{2j+1}(\xi^{k}\phi) -(2k+1)(V^{\ast})^{2j+2}(\xi^{k}u)&= 2\sum_{l=0}^{j-1}(V^\ast)^{2j-2l-2}V_tV^\ast(V)^{2l+1}(\xi^{k}\phi)\\ \partial_t(V^{\ast})^{2j+1}(\xi^{k}u) +\frac{1}{2k+1}(V)^{2j+2}(\xi^{k} \phi)&=2\sum_{l=0}^{j-1}(V^\ast)^{2j-2l-1}V_tV^\ast(V^\ast)^{2l}(\xi^{k}u)\\ &\;\;\;+V_t^\ast (V^\ast)^{2j}(\xi^ku) \label{i=2j+1} \end{split} \end{equation} For $i=2j$ where $j\geq 2$: \begin{equation} \begin{split} \partial_t(V)^{2j}(\xi^{k}\phi) -(2k+1)(V^{\ast})^{2j+1}(\xi^{k}u)&=2\sum_{l=0}^{j-2} (V^\ast)^{2j-2l-3} V_tV^\ast(V)^{2l+1}(\xi^k\phi)\\ &\;\;\;+V_t^\ast (V)^{2j-1}(\xi^k\phi)\\ \partial_t(V^{\ast})^{2j}(\xi^{k}u) +\frac{1}{2k+1}(V)^{2j+1}(\xi^{k} \phi)&=2\sum_{l=0}^{j-1}(V^\ast)^{2j-2l-2}V_tV^\ast(V^\ast)^{2l}(\xi^{k}u) \label{i=2j} \end{split} \end{equation} Our main goal is to estimate the mixed terms on the right hand sides of \eqref{i=2j+1} and \eqref{i=2j}. Note that the most intriguing cases seem to be when $l=0$, where the most spatial derivatives of $\frac{\partial_t\phi}{\phi}$ are present. Before getting into the estimates, let us get a better understanding of the effect of the operator $V_t$. First, recall the $(V^\ast)^i(\xi^k u)$'s. They give rise to $\partial_t\phi$ terms via the equations \eqref{euler} and furthermore they predict the right form of the spatial derivatives of the $\partial_t\phi$ terms. We borrow the computations from the previous section for $i\leq 4$.
\begin{equation} \begin{split} &\bullet\; V^\ast (\xi^k u)=-\frac{\phi^{2k}} {\xi^k}\partial_\xi u=\xi^k\partial_t\phi\\ &\bullet\; VV^{\ast}(\xi^k u)=\frac{1}{\xi^k}\partial_\xi[\phi^{2k}\partial_t\phi] =\frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}] +(2k+1)\frac{\phi^{2k}}{\xi^k}\partial_\xi\phi\frac{\partial_t\phi} {\phi}\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \boxed{\frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}]} +V(\xi^k\phi) \frac{\partial_t\phi}{\phi}\label{u2} \end{split} \end{equation} Now let us compute $V_tV^\ast(\xi^k u)$ and write in terms of the energy: \begin{equation} \begin{split} \frac{1}{2k}V_tV^\ast (\xi^k u)&=\frac{1}{\xi^k}\partial_\xi [\phi^{2k+1}|\frac{\partial_t\phi}{\phi}|^2]=\frac{\partial_t\phi}{\phi}\{2 \frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}]+(2k+1) \frac{\phi^{2k}}{\xi^k}\partial_\xi\phi\frac{\partial_t\phi}{\phi}\}\\ &=2\frac{\partial_t\phi}{\phi}VV^\ast(\xi^k u)-|\frac{\partial_t\phi}{\phi}|^2V(\xi^k\phi)\label{v_tvu} \end{split} \end{equation} Next we write out $V^{\ast}VV^\ast (\xi^k u)$. 
\begin{equation*} \begin{split} &\bullet\; V^{\ast}VV^\ast (\xi^k u)=-\frac{\phi^{2k}}{\xi^k}\partial_\xi [\frac{\phi^{2k+1}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]-(2k+1)\{ \frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi] \frac{\partial_t\phi}{\phi}+ \frac{\phi^{4k}}{\xi^{3k}}\partial_\xi\phi \partial_\xi[ \frac{\partial_t\phi}{\phi}]\}\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= -\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]-(2k+1) \frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi] \frac{\partial_t\phi}{\phi} \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \boxed{-\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]} +V^\ast V(\xi^k\phi)\frac{\partial_t\phi}{\phi} \end{split} \end{equation*} Thus we note that $\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]$ is the right form of the second derivative of $\partial_t\phi$ that we do not want to destroy. Here is $V^\ast V_tV^\ast(\xi^k u)$. 
\begin{equation} \begin{split} &\frac{1}{2k}V^\ast V_tV^\ast (\xi^k u)=-\frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{1}{\xi^k}\frac{\partial_t\phi}{\phi}\{2 \frac{\phi^{2k+1}}{\xi^k}\partial_\xi[\frac{\partial_t\phi}{\phi}]+(2k+1) \frac{\phi^{2k}}{\xi^k}\partial_\xi\phi\frac{\partial_t\phi}{\phi}\}]\\ &=-2\frac{\partial_t\phi}{\phi}\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]] -2\frac{\phi^{4k+1}}{\xi^{3k}}|\partial_\xi[\frac{\partial_t\phi}{\phi}]|^2 -(2k+1)\frac{\phi^{2k}}{\xi^k}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi \phi]|\frac{\partial_t\phi}{\phi}|^2\label{vv_tvu} \end{split} \end{equation} Next $VV^{\ast}VV^\ast (\xi^k u)$: \begin{equation} \begin{split} \bullet\; VV^{\ast}VV^\ast (\xi^k u)=-\frac{1}{\xi^k}\partial_\xi[\frac{\phi^{2k-1}}{\xi^{2k}}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}} \partial_\xi[\frac{\partial_t\phi}{\phi}]]]-(2k+1)\frac{1}{\xi^k} \partial_\xi[\frac{\phi^{4k}}{\xi^{2k}} \partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi] \frac{\partial_t\phi}{\phi}]\\ =\boxed{-\frac{1}{\xi^k}\partial_\xi[\frac{\phi^{2k-1}}{\xi^{2k}}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}} \partial_\xi[\frac{\partial_t\phi}{\phi}]]]} +VV^\ast V(\xi^k\phi)\frac{\partial_t\phi}{\phi}+\frac{\phi^{2k}}{\xi^{2k}} {V^\ast V(\xi^k\phi)}\cdot\partial_\xi [\frac{\partial_t\phi}{\phi}] \end{split}\label{V^4u} \end{equation} In turn, $VV^\ast V_tV^\ast (\xi^k u)$: \begin{equation*} \begin{split} \frac{1}{2k}&VV^\ast V_tV^\ast (\xi^k u)=-2\frac{\partial_t\phi}{\phi}\frac{1}{\xi^k}\partial_\xi[ \frac{\phi^{2k-1}}{\xi^{2k}}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]]]\\ &-6\frac{\phi^{2k+1}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]\cdot \frac{1}{\xi^k\phi^2}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]] +(4k+6)\partial_\xi\phi\frac{\phi^{6k}}{\xi^{5k}}|\partial_\xi 
[\frac{\partial_t\phi}{\phi}]|^2\\&-(2k+1)|\frac{\partial_t\phi}{\phi}|^2 \cdot\frac{1}{\xi^k}\partial_\xi [\frac{\phi^{4k}}{\xi^{2k}}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi\phi]] -(4k+2)\frac{\partial_t\phi}{\phi} \frac{\phi^{3k}}{\xi^{3k}}\partial_\xi[\frac{\phi^{2k}}{\xi^{2k}}\partial_\xi \phi]\cdot\phi^k\partial_\xi[\frac{\partial_t\phi}{\phi}] \end{split} \end{equation*} As in the case $\frac{1}{2}<k\leq 1$, we chose the above specific expansion of $V^\ast VV^\ast(\xi^ku)$ in order to find the right expression of the second derivative of $\partial_t\phi$ and then estimate the mixed term $V^\ast V_tV^\ast(\xi^ku)$. This is because the nonlinearity of \eqref{euler} is prevalent at this level in the $V,V^\ast$ formulation and an arbitrary expansion may destroy the structure of \eqref{euler}. For higher order terms, we employ a rather crude expansion in a systematic way. Below, we present a representation for $(V^\ast)^i(\xi^k u)$, extract the information on the spatial derivatives of $\partial_t\phi$, which we denote by $T_i$, and derive the estimates of $(V^\ast)^{i-2} V_tV^\ast(\xi^ku)$. \\ \noindent{\textbf{Representation of $(V^\ast)^i(\xi^k u)$ and $T_i$ for $i\geq 2$}:} First, we define $T_i$ for $2\leq i\leq \lceil k\rceil+3$: \begin{equation}\label{T_i} T_2\equiv \frac{\phi^{2k+1}}{\xi^{k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]; \;\;T_3\equiv -\frac{1}{\xi^k\phi}\partial_\xi [\frac{\phi^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi}{\phi}]];\;\;T_i \equiv (V)^{i-3}T_3\text{ for }i\geq 4 \end{equation} We chose $T_2$ and $T_3$ so that \[ (V^\ast)^2(\xi^k u)=T_2+ V(\xi^k\phi)\frac{\partial_t\phi}{\phi},\;\; (V^\ast)^3(\xi^k u)=T_3+ (V)^2(\xi^k\phi)\frac{\partial_t\phi}{\phi}\,. \] In other words, they are the boxed terms above and in the previous sections. Note that by Lemma \ref{supk}, \begin{equation} \frac{T_2}{\xi^k}\in L_\xi^\infty,\;\;\frac{T_3}{\xi^{\lceil k\rceil -1}}\in L_\xi^\infty\,.
\end{equation} The validity of $T_i$ will follow from its construction below. Now we claim that, for any $i\geq 4$, $(V^\ast)^i(\xi^k u)$ has the following representation, which extends the case $i=2,3$: \begin{equation}\label{expu} (V^\ast)^i(\xi^k u)=T_i+ (V)^{i-1}(\xi^k\phi)\frac{\partial_t\phi}{\phi} + \sum_{j=2}^{i-2}\Phi_{i-j} T_j \end{equation} where \begin{equation}\label{Phi} \Phi_{i-j}\equiv \sum_{r=0}^{i-j-2} \{C_{r} \frac{1}{\xi^{r}} \sum_{\substack{l_1+\cdots+l_p=i-j-r\\l_1,\dots, l_p\geq 1}} C_{l_1\cdots l_p} \prod_{q=1}^{p} \frac{(V)^{l_q}(\xi^k\phi)}{\xi^k\phi}\} \end{equation} for some functions $C_{r}$, $C_{l_1\cdots l_p}$ which may only depend on $\frac{\phi}{\xi}$ and $\frac{\xi}{\phi}$, and therefore $C_{r}$, $C_{l_1\cdots l_p}$ are bounded by $||\frac{\phi}{\xi}||_{L_\xi^\infty}$ and $||\frac{\xi}{\phi}||_{L_\xi^\infty}$. Furthermore, $T_i$ and the last term in \eqref{expu} have the following property: for each $4\leq i\leq \lceil k\rceil +3$, \begin{equation} ||\frac{1}{\xi^{\lceil k\rceil+3-i}}\sum_{j=2}^{i-2}\Phi_{i-j} T_j||_{L_\xi^2}\text{ and }||\frac{T_i}{\xi^{\lceil k\rceil+3-i}}||_{L_\xi^2} \text{ are bounded by }\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)\,, \label{L^2prop} \end{equation} and for $4\leq i\leq \lceil k\rceil+1$, \begin{equation} ||\frac{1}{\xi^{\lceil k\rceil+2-i}}\sum_{j=2}^{i-2}\Phi_{i-j} T_j||_{L_\xi^\infty}\text{ and } ||\frac{T_i}{\xi^{\lceil k\rceil+2-i}}||_{L_\xi^\infty}\text{ are bounded by }\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u).\label{prop} \end{equation} In particular, each $T_i$ is well-defined. We are now ready to prove the representation formula \eqref{expu}. From the previous computation \eqref{V^4u}, \begin{equation}\label{v^4u2} (V^\ast)^4(\xi^k u)=T_4+(V)^3(\xi^k\phi)\frac{\partial_t\phi}{\phi} +\frac{(V)^2(\xi^k\phi)}{\xi^k\phi} T_2\,. \end{equation} Setting $\Phi_{4-2}=\frac{(V)^2(\xi^k\phi)}{\xi^k\phi}$, the formula \eqref{expu} holds for $i=4$.
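For instance, in the notation of \eqref{Phi}, the coefficient $\Phi_{4-2}$ above is the instance of the general template with $r=0$, $p=1$, $l_1=2$, $C_{0}=C_{l_1}=1$, and all other coefficients vanishing: \[ \Phi_{2}=C_{0}\,\frac{1}{\xi^{0}}\,C_{l_1}\, \frac{(V)^{2}(\xi^k\phi)}{\xi^k\phi} =\frac{(V)^{2}(\xi^k\phi)}{\xi^k\phi}\,. \]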
Moreover, by Lemma \ref{supk}, the properties \eqref{L^2prop} and \eqref{prop} are satisfied for $i=4$. For $i\geq 5$, by induction on $i$, we will show that $V$ or $V^\ast$ of each term in \eqref{expu} can be decomposed in the same fashion. With the aid of Lemma \ref{product rule}, applying $V$ or $V^\ast$ to the second term $(V)^{i-1}(\xi^k\phi)\frac{\partial_t\phi}{\phi}$ on the right hand side of \eqref{expu}, we obtain \[ \begin{split} \bullet\; &V((V)^{2j}(\xi^k\phi)\frac{\partial_t\phi}{\phi})=(V)^{2j+1}(\xi^k\phi) \frac{\partial_t\phi}{\phi}+\frac{(V)^{2j}(\xi^k\phi)}{\xi^k\phi}T_2\\ \bullet\;&V^\ast((V)^{2j+1}(\xi^k\phi)\frac{\partial_t\phi}{\phi})= (V)^{2j+2}(\xi^k\phi) \frac{\partial_t\phi}{\phi}-\frac{(V)^{2j+1}(\xi^k\phi)}{\xi^k\phi}{T_2} \end{split} \] Note that the right hand sides are of the right form. In the same spirit, we can apply $V$ or $V^\ast$ to the last term in \eqref{expu}. Here is the computation for the simplest case, when $j=2,\; r=0,\; l_1=i-2$. Consider $i=2j+2$ or $i=2j+3$. \[ \begin{split} \bullet\; &V^\ast(\frac{(V)^{2j}(\xi^k\phi)}{\xi^k\phi}{T_2})=2 \frac{V(\xi^k\phi)}{\xi^k\phi}\frac{V^{2j}(\xi^k\phi)}{\xi^k\phi}{T_2} -\frac{V^{2j+1}(\xi^k\phi)}{\xi^k\phi}{T_2}+ \frac{V^{2j}(\xi^k\phi)}{\xi^k\phi}{T_3}\\ \bullet\; &V(\frac{(V)^{2j+1}(\xi^k\phi)}{\xi^k\phi}{T_2})=-\frac{2}{2k+1} \frac{V(\xi^k\phi)}{\xi^k\phi}\frac{V^{2j+1}(\xi^k\phi)}{\xi^k\phi}{T_2} -\frac{V^{2j+2}(\xi^k\phi)}{\xi^k\phi}{T_2}+ \frac{V^{2j+1}(\xi^k\phi)}{\xi^k\phi}{T_3} \end{split} \] Each term on the right hand sides has a desirable form. From Lemma \ref{product rule}, we deduce that when $V$ or $V^\ast$ act on a function $h$, depending on that function, they can yield $\frac{h}{\xi}$ or $ \frac{V(\xi^k\phi)}{\xi^k}\frac{h}{\phi}$ or $Vh$ or $V^\ast h$. Therefore, $V$ or $V^\ast$ of the other cases of ${\Phi_{i-j}T_j}$ falls into the right form for the case $i+1$. Next we verify \eqref{L^2prop} and \eqref{prop}. Let $s\geq 4$ be given.
Assume that they hold for $i\leq s$. We first claim that \[ ||\frac{1}{\xi^{\lceil k\rceil+3-(s+1)}}\sum_{j=2}^{(s+1)-2} \Phi_{(s+1)-j} T_j||_{L_\xi^2}\text{ is bounded by }\mathcal{E}^{k, \lceil k\rceil+3}(\phi,u)\,. \] This can be justified by counting the derivatives and the powers of $\frac{1}{\xi}$, and by distributing the $\frac{1}{\xi}$ factors in the right way. The highest-derivative term, with an appropriate factor of $\frac{1}{\xi}$, is taken as the main $L_\xi^2$ term, and the others are bounded by taking the sup. Let $z\equiv \max\{j,l_1,\dots,l_p\}$ for each term. If $z=j$, we take $\frac{T_j}{\xi^{\lceil k\rceil+3-j}}$ as the $L_\xi^2$ term; its $L_\xi^2$ norm is bounded by $\mathcal{E}^{k,\lceil k\rceil+3} (\phi,u)$ by the induction hypothesis, since $j\leq s-1$. Now it remains to show that the remaining factors \[ \frac{\xi^{\lceil k\rceil+3-j}}{\xi^{\lceil k\rceil+3-(s+1)}} \sum_{r=0}^{(s+1)-j-2} \frac{1}{\xi^{r}} \sum_{\substack{l_1+\cdots+l_p=(s+1)-j-r\\l_1,\dots, l_p\geq 1}} \prod_{q=1}^{p} \frac{(V)^{l_q}(\xi^k\phi)}{\xi^k\phi} \] are bounded. It is enough to look at the following: for each $r\leq s-j-1$ \[ \frac{\xi^{s+1-j}}{\xi^{r}} \sum_{\substack{l_1+\cdots+l_p=s+1-j-r\\l_1,\dots, l_p\geq 1}} \prod_{q=1}^{p} \frac{(V)^{l_q}(\xi^k\phi)}{\xi^k\phi} \] which is clearly bounded by $\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)$ because of Lemma \ref{supk} and since $k\leq \lceil k\rceil$. When $z$ is one of the $l_q$'s, one derives the same conclusion. Thus from \eqref{expu}, we deduce that $||\frac{T_{s+1}}{\xi^{\lceil k\rceil+3-(s+1)}}||_{L_\xi^2}$ is also bounded, which finishes the verification of \eqref{L^2prop}. For the $L_\xi^\infty$ boundedness for $s+1\leq \lceil k\rceil+1$, \[ ||\frac{1}{\xi^{\lceil k\rceil+2-(s+1)}}\sum_{j=2}^{(s+1)-2} \Phi_{(s+1)-j} T_j||_{L_\xi^\infty} \] we employ the same counting argument, except that we have to use the fact that $\frac{T_2}{\xi^k},\;\frac{T_j}{\xi^{\lceil k\rceil+2-j}}\in L_\xi^\infty$ for $3\leq j \leq s$. 
This finishes the induction argument. These $T_i$'s and their properties will be useful for estimating the nonlinear terms involving $V_tV^\ast$. Based on the representation formula \eqref{expu} and the $T_i$'s, we further study the representation of the more general mixed terms $(V^\ast)^iV_t f$ for $i\leq \lceil k\rceil +1$. First, $V_tf$ can be written as \[ \frac{1}{2k}V_tf=\frac{1}{\xi^k}\partial_\xi[\frac{\partial_t\phi} {\phi}\frac{\phi^{2k}}{\xi^k}f]=\frac{\partial_t\phi} {\phi}Vf +\frac{f}{\xi^k\phi} T_2\,. \] Note that the right hand side has the same structure as the last two terms of the right hand side of \eqref{v^4u2}, upon taking $f$ to be $(V)^2(\xi^k\phi)$. Thus we can apply the same technique to obtain the following: for each $i\leq \lceil k\rceil+1$ \begin{equation}\label{gt} \frac{1}{2k}(V^\ast)^iV_tf=\frac{\partial_t\phi} {\phi}(V)^{i+1}f+ \sum_{m=0}^i\{\sum_{j=2}^{m+2} \Phi_{(m+2)-j}T_j \} (V)^{i-m}f\,. \end{equation} \begin{proposition} (Nonlinear estimates) The right hand sides in \eqref{i=2j+1} and \eqref{i=2j} are bounded in $L_\xi^2$ by a continuous function of $\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)$ and $C$. \end{proposition} \begin{proof} We treat only the most delicate term, $(V^\ast)^iV_tV^\ast(\xi^ku)$ for $i\leq \lceil k\rceil+1$. The idea is to estimate it in terms of the $T_j$'s and to use their properties. First, note that $V^\ast V_tV^\ast(\xi^ku)$ can be written in terms of $T_2,T_3$ as in \begin{equation}\label{V_t} \frac{1}{2k}V^\ast V_tV^\ast(\xi^ku)=2\frac{\partial_t\phi}{\phi}T_3- 2\frac{1}{\xi^k\phi}|T_2|^2+ V^\ast V(\xi^k\phi)|\frac{\partial_t\phi}{\phi}|^2\,. \end{equation} We would like to compute $(V)^iV^\ast V_tV^\ast(\xi^ku)$ for $1\leq i\leq \lceil k\rceil$. Consider $i=1$. We apply $V$ to each term in \eqref{V_t}. 
Since \[ \partial_\xi[\frac{\partial_t\phi}{\phi}]=\frac{\xi^k}{\phi^{2k+1}}T_2, \] we have \[ V(\frac{\partial_t\phi}{\phi}T_3)=\frac{\partial_t\phi}{\phi}T_{4} +\frac{\xi^k}{\phi^{2k+1}}T_2\cdot T_{3} \] \[ V(\frac{1}{\xi^k\phi}|T_2|^2)=\frac{2}{\xi^k\phi}T_2\cdot T_3- \frac{4k+3}{2k+1}\frac{1}{\xi^k\phi}\frac{V(\xi^k\phi)}{\xi^k\phi}|T_2|^2 \] \[ V(V^\ast V(\xi^k\phi)|\frac{\partial_t\phi}{\phi}|^2)=VV^\ast V(\xi^k\phi)|\frac{\partial_t\phi}{\phi}|^2+ 2\frac{\xi^k}{\phi^{2k+1}} T_2\cdot V^\ast V(\xi^k\phi)\frac{\partial_t\phi}{\phi} \] Thus we deduce that $VV^\ast V_tV^\ast(\xi^ku)$ is bounded in $L^2_\xi$ by the energy functional. In the same vein, by repeatedly applying $V^\ast$ and $V$, for any $i\geq 2$ we can write $(V)^iV^\ast V_tV^\ast(\xi^ku)$ as follows: \begin{equation}\label{T} \begin{split} \frac{1}{2k}(V)^iV^\ast V_tV^\ast(\xi^ku)=2\frac{\partial_t\phi}{\phi} T_{i+3}+(V)^{i+2}(\xi^k \phi)|\frac{\partial_t\phi}{\phi}|^2 +\frac{\partial_t\phi}{\phi}\cdot\sum_{j=2}^{i+1} \Phi_{(i+3)-j} T_j\\ +\sum_{j=2}^{i+2}C_j\frac{1}{\xi^k\phi}T_{i+4-j}T_j +\sum_{s=4}^{i+2}\{ \Phi_{(i+4)-s} (\sum_{j=2}^{s-2}C_{sj}\frac{1}{\xi^k\phi}T_{s-j}T_j)\} \end{split} \end{equation} where $\Phi$ is given as in \eqref{Phi} with possibly different coefficient functions, and $C_j$ and $C_{sj}$ are some functions bounded by $||\frac{\phi}{\xi}||_{L_\xi^\infty}$ and $||\frac{\xi}{\phi}||_{L_\xi^\infty}$. This formula \eqref{T} can also be obtained by plugging \eqref{expu} into \eqref{gt}. Now we claim that for each $1\leq i\leq \lceil k\rceil$, $||(V)^iV^\ast V_tV^\ast(\xi^ku)||_{L_\xi^2}$ is bounded by $\mathcal{E}^{k,\lceil k\rceil+3}(\phi,u)$. The first three terms on the right hand side of \eqref{T} are bounded since they are of the same form as in \eqref{expu}, multiplied by $\frac{\partial_t\phi}{\phi}$. The remaining terms are quadratic or higher in the energy terms and the $T_j$'s. 
For each term, the highest derivative with appropriate factor of $\frac{1}{\xi}$ is considered the main $L_\xi^2$ term and other factors are bounded by taking the sup. This can be done by employing the counting and distributing $\frac{1}{\xi}$ argument as well as the estimates of $T_j$'s \eqref{L^2prop} and \eqref{prop} as before. \end{proof} \section{Approximate Scheme}\label{5} In this section, we implement the linear approximate scheme and prove that the linear system is well-posed in some energy space. Let the initial data $\phi_{0}(\xi)$ and $u_{0}(\xi)$ of the Euler equations \eqref{euler} be given such that $\frac{1}{C_0}\leq \frac{\phi_{0}}{\xi}\leq C_0$ for a constant $C_0>1$, and $\mathcal{E}^{k,\lceil k\rceil+3}(\phi_{0},u_{0})\leq A$ for a constant $A>0$. Here $ \mathcal{E}^{k,\lceil k\rceil+3}(\phi_0,u_0)$ is the energy functional \eqref{ef} induced by $\phi_0$. Note that from the energy bound, we obtain $\frac{\partial_\xi u_0}{\xi}\in L^\infty_\xi$. We will construct approximate solutions $\phi_n(t,\xi)$ and $u_n(t,\xi)$ for each $n$ by induction satisfying the following properties: \begin{equation}\label{n} \phi_n|_{t=0}=\phi_{0},\;u_n|_{t=0}=u_{0};\; \phi_n|_{\xi=0}=u_n|_{\xi=1}=0;\; \frac{1}{C_n}\leq\frac{\phi_n}{\xi}\leq C_n,\text{ for }C_n>1 \end{equation} Note that $\phi_0,u_0$ automatically satisfy \eqref{n}. 
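The earlier observation that the energy bound yields $\frac{\partial_\xi u_0}{\xi}\in L^\infty_\xi$ can be made explicit from the structure of the operator $V^\ast$ induced by $\phi_0$, namely $g\mapsto -\frac{\phi_0^{2k}}{\xi^k}\partial_\xi[\frac{g}{\xi^k}]$. Here is a minimal sketch, under the assumption (in the spirit of Lemma \ref{supk}) that the energy functional controls the weighted sup norm appearing on the right:

```latex
% From V^\ast(\xi^k u_0) = -\frac{\phi_0^{2k}}{\xi^k}\,\partial_\xi u_0 we obtain
\frac{\partial_\xi u_0}{\xi}
  = -\Big(\frac{\xi}{\phi_0}\Big)^{2k}\frac{V^\ast(\xi^k u_0)}{\xi^{k+1}}\,,
\qquad\text{hence}\qquad
\Big\|\frac{\partial_\xi u_0}{\xi}\Big\|_{L^\infty_\xi}
  \leq C_0^{2k}\,\Big\|\frac{V^\ast(\xi^k u_0)}{\xi^{k+1}}\Big\|_{L^\infty_\xi}\,,
```

and the right hand side is finite by the assumed sup-norm control coming from $\mathcal{E}^{k,\lceil k\rceil+3}(\phi_0,u_0)$ together with $\frac{\xi}{\phi_0}\leq C_0$.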
Define the operators $V_n$ and $V_n^{\ast}$ as follows: \begin{equation} \begin{split} V_n(f)\equiv \frac{1}{\xi^k}\partial_\xi (\frac{\phi_n^{2k}}{\xi^k}f),\;\;\; V_n^{\ast}(g)\equiv-\frac{\phi_n^{2k}}{\xi^k}\partial_\xi( \frac{1}{\xi^k} g) \end{split}\label{v_nk} \end{equation} In addition, we define the commutator operators $(V_n)_t$ and $(V_n^\ast)_t$: \[ (V_n)_t(f)\equiv 2k\frac{1}{\xi^k}\partial_\xi[\frac{\phi_n^{2k-1}\partial_t\phi_n}{\xi^k}f],\;\; (V_n^\ast)_t(g)\equiv -2k\frac{\phi_n^{2k-1}\partial_t\phi_n}{\xi^k} \partial_\xi[\frac{g}{\xi^k}] \] We define $\partial_t\phi_0$ through the equation by \[ \partial_t\phi_0\equiv -\frac{\phi_0^{2k}}{\xi^{2k}}\partial_\xi u_0\,. \] For the linear iteration scheme, rather than approximating $\phi$ and $u$ themselves, we approximate $G\equiv (V)^3(\xi^k\phi)$ and $F\equiv (V^\ast)^3(\xi^ku)$, which satisfy the equations \eqref{i=2j+1} with $j=1$. Let \[ D_0\equiv V_0(\xi^k\phi_0),\;H_0\equiv V^\ast_0(\xi^k u_0),\;G_0\equiv V_0V^\ast_0 V_0(\xi^k\phi_0),\;F_0\equiv V_0^\ast V_0V^\ast_0(\xi^k u_0). 
\] For each $n\geq 0$, consider the following approximate equations \begin{equation}\label{FG} \begin{split} &\partial_tG_{n+1} -(2k+1)V_n F_{n+1}=J_n^1\\ &\partial_tF_{n+1} +\frac{1}{2k+1}V_n^\ast G_{n+1}=J_n^2 \end{split} \end{equation} where $J_n^1$ and $J_n^2$ are given as follows: \[ \begin{split} J_n^1&\equiv 4k\frac{\partial_t\phi_n}{\phi_n}G_n+ 4k\frac{\phi_n^{2k}}{\xi^k} \partial_\xi[\frac{\partial_t\phi_n}{\phi_n}] \frac{V_{n}^\ast D_n}{\xi^k}\\ J_n^2&\equiv 2k\frac{\partial_t\phi_n}{\phi_n}F_n+ 4k|\frac{\partial_t\phi_n}{\phi_n}|^2V_{n}^\ast D_n -8k\frac{\phi_n^{4k+1}}{\xi^{3k}} |\partial_\xi[\frac{\partial_t\phi_n}{\phi_n}]|^2 -8k\frac{\partial_t \phi_n}{\phi_n}\frac{1}{\xi^k\phi_n}\partial_\xi [\frac{\phi_n^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi_n}{\phi_n}]] \end{split} \] The initial and boundary conditions are inherited from the original system: $$G_{n+1}|_{t=0}=V_0V^\ast_0 V_0(\xi^k\phi_0),\, F_{n+1}|_{t=0}=V_0^\ast V_0V^\ast_0(\xi^k u_0),\,G_{n+1}|_{\xi=1}=0\,.$$ Note that $F_{n+1}|_{\xi=0}$ is built into the equations \eqref{FG}. In turn, from these $F_{n+1},\,G_{n+1}$, we define $D_{n+1},\;H_{n+1},\;\phi_{n+1},\; u_{n+1}$: \[ \begin{split} D_{n+1}&\equiv -\xi^k\int_1^\xi\frac{\xi'^{2k}}{\phi_n^{4k}}\int_0^{\xi'} \xi_1^{k}G_{n+1}d\xi_1 d\xi',\;H_{n+1}\equiv -\frac{\xi^k}{\phi_n^{2k}} \int_0^\xi \xi'^k\int_1^{\xi'} \frac{\xi_1^k}{\phi_n^{2k}}F_{n+1}d\xi_1d\xi'\\ \phi_{n+1}&\equiv \{\int_0^\xi \xi'^k D_{n+1}(t,\xi')d\xi'\}^{\frac{1}{2k+1}},\; \partial_\xi u_{n+1}\equiv-\frac{\xi^k H_{n+1}}{\phi^{2k}_{n+1}},\;u_{n+1}\equiv \int_1^\xi \partial_\xi u_{n+1} d\xi' \end{split} \] Note that we have used the boundary condition at $\xi=1$ in order to invert $V^\ast_n$. 
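As a consistency check (a direct computation from the definition of $D_{n+1}$ and the operators \eqref{v_nk}; no new ingredients), the inversion of $V_nV_n^\ast$ can be verified explicitly:

```latex
V_n^\ast D_{n+1}
 =-\frac{\phi_n^{2k}}{\xi^k}\partial_\xi\Big[\frac{D_{n+1}}{\xi^k}\Big]
 =\frac{\phi_n^{2k}}{\xi^k}\cdot\frac{\xi^{2k}}{\phi_n^{4k}}
   \int_0^{\xi}\xi_1^{k}G_{n+1}\,d\xi_1
 =\frac{\xi^{k}}{\phi_n^{2k}}\int_0^{\xi}\xi_1^{k}G_{n+1}\,d\xi_1\,,
\qquad
V_nV_n^\ast D_{n+1}
 =\frac{1}{\xi^k}\partial_\xi\Big[\int_0^{\xi}\xi_1^{k}G_{n+1}\,d\xi_1\Big]
 =G_{n+1}\,.
```

The cancellation of the weights $\frac{\phi_n^{2k}}{\xi^k}\cdot\frac{\xi^k}{\phi_n^{2k}}$ in the last step is what makes the double integral the exact inverse.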
Also note that from the above definitions the following identities hold $$V_nV_n^\ast D_{n+1}=G_{n+1},\, V_n^\ast V_nH_{n+1}=F_{n+1},\, D_{n+1}=V_{n+1}(\xi^k\phi_{n+1}),\,H_{n+1}= V_{n+1}^\ast(\xi^k u_{n+1})\,.$$ In view of Proposition \ref{EE}, it is easy to deduce that $(V_n^\ast)^i D_{n+1}$ and $(V_n)^i H_{n+1}$ for $0\leq i\leq \lceil k\rceil +2$ are well-defined, namely bounded in $L^2_\xi$. Also $V_{n+1}$ and $V^\ast_{n+1}$ are defined as in \eqref{v_nk} with $\phi_{n+1}$. The right hand sides $J_n^1,$ $J_n^2$ of \eqref{FG} are approximations of $2(V_n)_tV_n^\ast V_n(\xi^k\phi_n)$ and $2V_n^\ast (V_n)_tV_n^\ast (\xi^k u_{n}) +(V_n^\ast)_tV_nV_n^\ast (\xi^k u_{n})$ in the following manner: \[ \begin{split} \frac{1}{2k}&(V_n)_tV_n^\ast V_n(\xi^k\phi_n)=-(2k+1)\frac{1}{\xi^k}\partial_\xi[ \frac{\phi_n^{2k-1}\partial_t\phi_n}{\xi^k}\frac{\phi_n^{2k}}{\xi^k} \partial_\xi [\frac{\phi_n^{2k}}{\xi^{2k}}\partial_\xi\phi_n]]\\ =& -(2k+1)\frac{1}{\xi^k}\partial_\xi[\frac{\partial_t\phi_n}{\phi_n}\cdot \frac{\phi_n^{4k}}{\xi^{2k}}\partial_\xi [\frac{\phi_n^{2k}}{\xi^{2k}}\partial_\xi\phi_n]] =\frac{\partial_t\phi_n}{\phi_n}V_nV_n^\ast V_n(\xi^k\phi_n)+ \frac{\phi_n^{2k}}{\xi^k} \partial_\xi[\frac{\partial_t\phi_n}{\phi_n}] \frac{V_{n}^\ast V_n(\xi^k\phi_n)}{\xi^k}\\ \sim &\; \frac{\partial_t\phi_n}{\phi_n}G_n+ \frac{\phi_n^{2k}}{\xi^k} \partial_\xi[\frac{\partial_t\phi_n}{\phi_n}] \frac{V_{n}^\ast D_n}{\xi^k} \end{split} \] \[ \begin{split} \frac{1}{2k}&V_n^\ast (V_n)_tV_n^\ast (\xi^k u_{n}) \sim -2\frac{\partial_t \phi_n}{\phi_n}\frac{1}{\xi^k\phi_n}\partial_\xi [\frac{\phi_n^{4k+2}}{\xi^{2k}}\partial_\xi[\frac{\partial_t\phi_n}{\phi_n}]] -2\frac{\phi_n^{4k+1}}{\xi^{3k}} |\partial_\xi[\frac{\partial_t\phi_n}{\phi_n}]|^2 +|\frac{\partial_t\phi_n}{\phi_n}|^2V_{n}^\ast D_n\\ \frac{1}{2k}&(V_n^\ast)_tV_nV_n^\ast (\xi^k u_{n}) =\frac{\partial_t\phi_n}{\phi_n}V_n^\ast V_nV_n^\ast(\xi^k u_n)\sim \frac{\partial_t\phi_n}{\phi_n}F_n \end{split} \] In particular, note that the 
approximation of $V_n^\ast (V_n)_tV_n^\ast (\xi^k u_{n})$, which has the strongest nonlinearity in view of the a priori estimates, is based on the expression \eqref{vv_tvu}. Also note that the equation \eqref{FG} converges to \eqref{i=2j+1} for $j=1$ in the formal limit. We define the approximate energy functional $\widetilde{\mathcal{E}}^k_{n+1}$ at the $n$-th step: \begin{equation}\label{aef} \begin{split} \widetilde{\mathcal{E}}^k_{n+1}(t)&\equiv \sum_{i=0}^{\lceil k\rceil}\int \frac{1}{(2k+1)^2}|(V_n^\ast)^iG_{n+1}|^2 +|(V_n)^iF_{n+1}|^2d\xi\\ &\equiv ||\frac{1}{2k+1}G_{n+1}||^2_{Y_n^{k,\lceil k\rceil}}+ ||F_{n+1}||^2_{X_n^{k,\lceil k\rceil}} \end{split} \end{equation} where $X_n^{k,s}\text{ and } Y_n^{k,s}$ denote $X^{k,s}$ and $Y^{k,s}$ induced by $V_n$ and $V_n^\ast$ in \eqref{XY}. We now state and prove that the approximate system \eqref{FG} is well-posed in the energy space generated by $X_n,$ $Y_n$ under the following induction hypotheses: \begin{equation*} \begin{split} \text{(HP1) } &\widetilde{\mathcal{E}}^k_{n}<\infty\text{ and } \phi_n,\, u_{n}\text{ satisfy } \eqref{n}\,;\\ \text{(HP2) }&\text{when }k\leq 1, ||\frac{\partial_t\phi_n}{\phi_n}||_{L_\xi^\infty}\text{ and } ||\xi\partial_\xi[\frac{\partial_t\phi_n}{\phi_n}]||_{L_\xi^\infty} \text{ are bounded by }\widetilde{\mathcal{E}}^k_{n},\, \widetilde{\mathcal{E}}^k_{n-1};\\ &\text{when }k>1, ||\frac{\partial_t\phi_n}{\phi_n}||_{L_\xi^\infty}, ||\frac{T_{2,n}}{\xi^k}||_{L_\xi^\infty}, ||\frac{T_{i,n}}{\xi^{\lceil k\rceil +2-i}}||_{L_\xi^\infty}, ||\frac{(V_n)^i(\xi^k\phi_n)}{\xi^{\lceil k\rceil +2-i}}||_{L_\xi^\infty}\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\text{ for }3\leq i\leq \lceil k\rceil+1\text{ are bounded by }\widetilde{\mathcal{E}}^k_{n}, \,\widetilde{\mathcal{E}}^k_{n-1};\\ \text{(HP3) }& J_n^1\text{ and }J_n^2\text{ in }\eqref{FG} \text{ are bounded in }Y_n^{k,\lceil k\rceil},\, X_n^{k,\lceil k\rceil} \text{ respectively.} \end{split} \end{equation*} Here $T_{i,n}$ is the $T_i$ 
defined in \eqref{T_i} with $\phi$ taken to be $\phi_n$. \begin{proposition}\label{EE} (Well-posedness of the approximate system and regularity) Under the hypotheses (HP1), (HP2), and (HP3), the linear system \eqref{FG} admits a unique solution $(G_{n+1},F_{n+1})$ in the $Y_n^{k,\lceil k\rceil}$, $X_n^{k,\lceil k\rceil}$ spaces. Furthermore, we obtain the following energy bound. \begin{equation*} \widetilde{\mathcal{E}}^k_{n+1}(t)\leq \widetilde{\mathcal{E}}^k_{n+1}(0)+\int_0^t \mathcal{C}_5(\widetilde{\mathcal{E}}^k_{n-1},\widetilde{\mathcal{E}}^k_{n}, \widetilde{\mathcal{E}}^k_{n+1}) (\widetilde{\mathcal{E}}^k_{n+1})^{\frac{1}{2}}d\tau \end{equation*} where $\mathcal{C}_5(\widetilde{\mathcal{E}}^k_{n-1},\widetilde{\mathcal{E}}^k_{n}, \widetilde{\mathcal{E}}^k_{n+1})$ is a continuous function of $\widetilde{\mathcal{E}}^k_{n-1},\; \widetilde{\mathcal{E}}^k_{n},\;\widetilde{\mathcal{E}}^k_{n+1}$ and $C_0$. \end{proposition} Proposition \ref{EE} directly follows from Proposition 7.1. In the next subsection, we verify the induction hypotheses. \subsection{Induction procedure} In order to complete the induction procedure for the approximate scheme, it now remains to verify the induction hypotheses (HP1), (HP2), and (HP3) for $n+1$, as described in Proposition \ref{EE}. The spirit is the same as in the a priori estimates. However, since $n$ and $n+1$ are intertwined in the energy functional \eqref{aef}, the verification of the induction hypotheses requires some care. We treat only the case $k=1$; the other cases can be estimated similarly. First, it is easy to see that $\phi_{n+1}$ and $u_{n+1}$ constructed above satisfy the initial and boundary conditions in \eqref{n} for $n+1$. The boundedness of $\frac{\phi_{n+1}}{\xi}$ will follow from a continuity argument using the estimate of $\frac{\partial_t\phi_{n+1}}{\xi}$. 
Write the equation for $\partial_t(\phi_{n+1}^3)$ from the definition of $\phi_{n+1}$: \begin{equation} \begin{split} 3\phi_{n+1}^2\partial_t\phi_{n+1}&= 4\int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2^2\partial_t\phi_{n}}{\phi_{n}^5} \int_0^{\xi_2} \xi_1 G_{n+1}d\xi_1d\xi_2d\xi'\\ &\;\;-\int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2^2}{\phi_{n}^4} \int_0^{\xi_2} \xi_1 \partial_tG_{n+1}d\xi_1d\xi_2d\xi'\\ &=4\int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2^2\partial_t\phi_{n}}{\phi_{n}^5} \int_0^{\xi_2} \xi_1 G_{n+1}d\xi_1d\xi_2d\xi'\\ &\;\;-3\int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2}{\phi_{n}^2}F_{n+1} d\xi_2d\xi'-\int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2^2}{\phi_{n}^4} \int_0^{\xi_2} \xi_1 J_{n}^1d\xi_1d\xi_2d\xi' \end{split}\label{dt} \end{equation} The first term can be controlled as follows: since \begin{equation} \int_0^{\xi_2} \xi_1 G_{n+1}d\xi_1\leq \xi_2^{\frac{5}{2}} ||\frac{G_{n+1}}{\xi}||_{L^2_\xi},\label{g1} \end{equation} it follows that \[ \int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2^2\partial_t\phi_{n}}{\phi_{n}^5} \int_0^{\xi_2} \xi_1 G_{n+1}d\xi_1d\xi_2d\xi'\leq \xi^{\frac{7}{2}} ||\frac{\xi}{\phi_{n}}||^4_{L^\infty_\xi} ||\frac{\partial_t\phi_{n}}{\phi_{n}}||_{L^\infty_\xi} ||\frac{G_{n+1}}{\xi}||_{L^2_\xi}\,. \] Thus $\frac{1}{\xi^3} \int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2^2\partial_t\phi_{n}}{\phi_{n}^5} \int_0^{\xi_2} \xi_1 G_{n+1}d\xi_1d\xi_2d\xi'$ is bounded by $\widetilde{\mathcal{E}}_{n+1}$ and the previous energies. For the second term, note that $$\int_1^{\xi'}\frac{\xi_2}{\phi_{n}^2}F_{n+1} d\xi_2\leq ||\frac{\xi}{\phi_{n}}||^2_{L^\infty_\xi} ||\frac{F_{n+1}}{\xi}||_{L^2_\xi}\,.$$ Hence, we obtain \[ \frac{1}{\xi^3}\int_0^\xi (\xi')^2\int_1^{\xi'}\frac{\xi_2}{\phi_{n}^2}F_{n+1} d\xi_2d\xi'\leq ||\frac{\xi}{\phi_{n}}||^2_{L^\infty_\xi} ||\frac{F_{n+1}}{\xi}||_{L^2_\xi}\,. 
\] Finally, since \begin{equation} \begin{split} \int_0^{\xi_2} \xi_1 J_{n}^1d\xi_1&\leq \xi_2^{\frac{5}{2}}||\frac{\partial_t\phi_{n}}{\phi_{n}}||_{L_\xi^\infty} ||\frac{G_{n}}{\xi}||_{L^2_\xi} \\&\;\;+\xi_2^{\frac{5}{2}}||\frac{\phi_n}{\xi}||^2_{L^\infty_\xi} ||\frac{\phi_n}{\phi_{n-1}}||^2_{L^\infty_\xi} ||\xi\partial_\xi [\frac{\partial_t\phi_{n}}{\phi_{n}}]||_{L^\infty_\xi} ||\frac{V_{n-1}^\ast D_{n}}{\xi}||_{L^2_\xi} \end{split}\label{g2} \end{equation} the last term is also bounded, and therefore we conclude that $\partial_t(\frac{\phi_{n+1}^3}{\xi^3})$ is bounded by $\widetilde{\mathcal{E}}_{n+1}$ and the previous energies. Note that the nontrivial contribution near the boundary $\xi=0$ comes from the second term $F_{n+1}$. Since \[ \frac{\phi_{n+1}^3}{\xi^3}(t)=\frac{\phi_{n+1}^3}{\xi^3}(0)+\int_0^t \partial_t(\frac{\phi_{n+1}^3}{\xi^3})d\tau=\frac{\phi_{0}^3}{\xi^3}(0)+\int_0^t \partial_t(\frac{\phi_{n+1}^3}{\xi^3})d\tau\,, \] we get \[ \frac{1}{C_0^3}- T||\partial_t(\frac{\phi_{n+1}^3}{\xi^3})||_{L^\infty_\xi} \leq \frac{\phi_{n+1}^3}{\xi^3}\leq C_0^3+ T||\partial_t(\frac{\phi_{n+1}^3}{\xi^3})||_{L^\infty_\xi} \] and as a result, we also obtain $C_{n+1}$ in \eqref{n}, depending on $T$ and the energy bounds. We move on to (HP2). Since $\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}= \frac{\partial_t\phi_{n+1}}{\xi}\frac{\xi}{\phi_{n+1}}$, we immediately deduce that $||\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}||_{L_\xi^\infty}$ is bounded. 
Take $\partial_\xi$ of \eqref{dt}: \begin{equation} \begin{split} 3\phi_{n+1}^3\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}] + 9\phi_{n+1}\partial_\xi\phi_{n+1}\partial_t\phi_{n+1}= 4\xi^2\int_1^{\xi}\frac{\xi_2^2\partial_t\phi_{n}}{\phi_{n}^5} \int_0^{\xi_2} \xi_1 G_{n+1}d\xi_1d\xi_2\\ -3\xi^2\int_1^{\xi}\frac{\xi_2}{\phi_{n}^2}F_{n+1} d\xi_2-\xi^2\int_1^{\xi}\frac{\xi_2^2}{\phi_{n}^4} \int_0^{\xi_2} \xi_1 J_{n}^1d\xi_1d\xi_2 \end{split}\label{ddt} \end{equation} Therefore the boundedness of $||\xi\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]||_{L_\xi^\infty}$ follows by the same reasoning. For (HP3), we now show that $J_{n+1}^1$ and $J_{n+1}^2$ of \eqref{FG} at the next step are bounded in $Y_{n+1}^{1,1}$ and $X_{n+1}^{1,1}$, respectively. \begin{claim} $V_{n+1}^\ast J_{n+1}^1$ and $V_{n+1}J_{n+1}^2$ are bounded in $L_\xi^2$. \end{claim} \begin{proof} The spirit of the proof is the same as in the nonlinear estimates of the a priori estimates. We present the detailed computation for $J_{n+1}^2$, which is more complicated than $J_{n+1}^1$. We start with $V_{n+1}(\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}F_{n+1})$. 
\[ \begin{split} V_{n+1}(\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}F_{n+1})&=\frac{1}{\xi} \partial_\xi[\frac{\phi_{n+1}^2}{\xi}\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}F_{n+1}] =\frac{1}{\xi} \partial_\xi[\frac{\phi_{n+1}^2}{\phi_n^2}\frac{\partial_t\phi_{n+1}}{\phi_{n+1}} \frac{\phi_{n}^2}{\xi}F_{n+1}]\\ &=\underbrace{\frac{\phi_{n+1}^2}{\phi_n^2}\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}} _{L^\infty_\xi}\underbrace{ V_nF_{n+1}}_{L^2_\xi} + \underbrace{\frac{\phi_{n+1}^2}{\xi^2} \xi\partial_\xi[\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]}_{L^\infty_\xi} \underbrace{\frac{F_{n+1}}{\xi}}_{L^2_\xi}\\ &\;\;+\underbrace{2(\frac{\phi_{n+1}}{\xi}\partial_\xi\phi_{n+1} -\frac{\phi_{n+1}^2}{\xi\phi_n}\partial_\xi\phi_{n}) \frac{\partial_t\phi_{n+1}}{\phi_{n+1}}}_{L^\infty_\xi} \underbrace{\frac{F_{n+1}}{\xi}}_{L^2_\xi} \end{split} \] Thus $||V_{n+1}(\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}F_{n+1})||_{L^2_\xi}$ is bounded by $\widetilde{\mathcal{E}}_{n+1}$. Next we compute $V_{n+1} (|\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}|^2V_{n+1}^\ast D_{n+1})$. \[ \begin{split} V_{n+1} (|\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}|^2V_{n+1}^\ast D_{n+1})&= - \frac{1}{\xi}\partial_\xi[\frac{\phi_{n+1}^4}{\xi^2}\partial_\xi [\frac{D_{n+1}}{\xi}]|\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}|^2]\\ &=\underbrace{\frac{\phi_{n+1}^4}{\phi_n^4}|\frac{\partial_t\phi_{n+1}} {\phi_{n+1}}|^2}_{L^\infty_\xi}\underbrace{G_{n+1}}_{L^2_\xi} +\underbrace{\frac{\phi_{n+1}^4}{\xi^2\phi_n^2} \frac{\partial_t\phi_{n+1}}{\phi_{n+1}}\xi\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]}_{L^\infty_\xi} \underbrace{\frac{V_n^\ast D_{n+1}}{\xi}}_{L^2_\xi}\\ &\;\;+ \underbrace{4 (\frac{\phi_{n+1}^3}{\xi\phi_n^2} \partial_\xi\phi_{n+1}-\frac{\phi_{n+1}^4}{\xi\phi_n^3}\partial_\xi\phi_n) |\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}|^2}_{L^\infty_\xi} \underbrace{\frac{V_n^\ast D_{n+1}}{\xi}}_{L^2_\xi} \end{split} \] Thus $V_{n+1} (|\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}|^2V_{n+1}^\ast D_{n+1})$ is bounded in $L^2_\xi$. 
In order to take care of the rest of $J_{n+1}^2$, first we claim that \begin{equation} ||\frac{1}{\xi^3}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]]||_{L^2_\xi}\text{ is bounded by }\widetilde{\mathcal{E}}_{n+1},\,\widetilde{\mathcal{E}}_{n}\,.\label{claim} \end{equation} In order to do so, multiply \eqref{ddt} by $\frac{\phi_{n+1}^3}{3\xi^2}$ and take $\partial_\xi$ to get \[ \begin{split} &\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]]+3\partial_\xi[\phi_{n+1}^3] \frac{\phi_{n+1}^2}{\xi^2}\partial_\xi\phi_{n+1} \frac{\partial_t\phi_{n+1}}{\phi_{n+1}}\\&\;\;+3\phi_{n+1}^3\partial_\xi [\frac{\phi_{n+1}^2}{\xi^2}\partial_\xi\phi_{n+1}] \frac{\partial_t\phi_{n+1}}{\phi_{n+1}} +\underline{3\phi_{n+1}^3 \frac{\phi_{n+1}^2}{\xi^2}\partial_\xi\phi_{n+1}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]}_{(\star)}\\&= \phi_{n+1}^2\partial_\xi\phi_{n+1}\{4\int_1^\xi Ad\xi'-3\int_1^\xi Bd\xi'-\int_1^\xi Cd\xi'\}+ \frac{\phi_{n+1}^3}{3}\{4A-3B-C\} \end{split} \] where $A,B,C$ are defined so that \eqref{ddt} can be written as $$3\phi_{n+1}^3\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}] + 9\phi_{n+1}\partial_\xi\phi_{n+1}\partial_t\phi_{n+1}=\xi^2\{ 4\int_1^\xi Ad\xi'-3\int_1^\xi Bd\xi'-\int_1^\xi Cd\xi'\}\,.$$ The term $(\star)$ becomes \[ (\star)=\frac{\phi_{n+1}^2}{\xi^2}\partial_\xi\phi_{n+1}\{ -9 \phi_{n+1}\partial_\xi\phi_{n+1}\partial_t\phi_{n+1}+\xi^2(4\int_1^\xi Ad\xi'-3\int_1^\xi Bd\xi'-\int_1^\xi Cd\xi')\} \] Hence by canceling terms, we obtain \begin{equation} \begin{split} \partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]]&=- 3\phi_{n+1}^3\partial_\xi [\frac{\phi_{n+1}^2}{\xi^2}\partial_\xi\phi_{n+1}] \frac{\partial_t\phi_{n+1}}{\phi_{n+1}}+\frac{\phi_{n+1}^3}{3}\{4A-3B-C\}\\ &=\phi_{n+1}^3\frac{\xi^2}{\phi_n^2}\frac{V_{n}^\ast D_{n+1}}{\xi} + \frac{\phi_{n+1}^3}{3}\{4A-3B-C\} \end{split}\label{dddt} 
\end{equation} Indeed, $A,B,C$ were treated during the estimates of $||\frac{\partial_t\phi_{n+1}}{\xi}||_{L^\infty_\xi}$. From the same analysis, we deduce that $L^2_\xi$ norms of $A,B,C$ are bounded by $\widetilde{\mathcal{E}}_{n+1},\,\widetilde{\mathcal{E}}_{n}$. Thus the claim \eqref{claim} follows. Now we consider $V_{n+1}(\frac{\phi_{n+1}^5}{\xi^3} |\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]|^2)$. \[ \begin{split} & V_{n+1}(\frac{\phi_{n+1}^5}{\xi^3} |\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]|^2)=\frac{1}{\xi}\partial_\xi [\frac{1}{\phi_{n+1}^5}|\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]|^2]\\ &=2\phi_{n+1}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]\cdot \frac{1}{\xi^3}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]] -5 \frac{\partial_\xi\phi_{n+1}}{\xi\phi_{n+1}^6} |\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]|^2 \end{split} \] By \eqref{claim}, the first term in the right hand side is controllable, and again due to \eqref{claim}, since $|\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]|^2\leq\xi^7 ||\frac{1}{\xi^3}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]]||_{L^2_\xi}^2$, we obtain \[ ||\frac{1}{\xi^7}|\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]|^2||_{L^2_\xi}\leq ||\frac{1}{\xi^3}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi [\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]]||_{L^2_\xi}^2 \] and this completes the estimate. 
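The pointwise bound $|\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]|^2\leq \xi^7||\frac{1}{\xi^3}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]]||^2_{L^2_\xi}$ used just above is a consequence of the fundamental theorem of calculus and the Cauchy--Schwarz inequality; here is a minimal sketch, under the assumption that $h\equiv\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}]$ vanishes at $\xi=0$:

```latex
|h(\xi)|
 =\Big|\int_0^{\xi}\xi_1^{3}\,\frac{\partial_\xi h(\xi_1)}{\xi_1^{3}}\,d\xi_1\Big|
 \leq\Big(\int_0^{\xi}\xi_1^{6}\,d\xi_1\Big)^{\frac12}
   \Big\|\frac{\partial_\xi h}{\xi^{3}}\Big\|_{L^2_\xi}
 =\frac{\xi^{\frac72}}{\sqrt{7}}
   \Big\|\frac{\partial_\xi h}{\xi^{3}}\Big\|_{L^2_\xi}\,.
```

Squaring and dropping the harmless factor $\frac{1}{7}$ gives the stated inequality, with the $L^2_\xi$ norm on the right controlled by \eqref{claim}.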
For the last term in $J_{n+1}^2$, note that \[ \begin{split} &V_{n+1}(\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}\frac{1}{\xi\phi_{n+1}} \partial_\xi[\frac{\phi_{n+1}^6}{\xi^2}\partial_\xi[\frac{\partial_t\phi_{n+1}} {\phi_{n+1}}]])\\ &=\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}\frac{1}{\xi}\partial_\xi [\frac{\phi_{n+1}}{\xi^2}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2} \partial_\xi[\frac{\partial_t\phi_{n+1}} {\phi_{n+1}}]]]+\phi_{n+1}\partial_\xi[\frac{\partial_t\phi_{n+1}}{\phi_{n+1}}] \frac{1}{\xi^3}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2} \partial_\xi[\frac{\partial_t\phi_{n+1}} {\phi_{n+1}}]] \end{split} \] Thus it remains to show that $\frac{1}{\xi}\partial_\xi [\frac{\phi_{n+1}}{\xi^2}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2} \partial_\xi[\frac{\partial_t\phi_{n+1}} {\phi_{n+1}}]]]$ is bounded by $\widetilde{\mathcal{E}}_{n+1},\,\widetilde{\mathcal{E}}_{n}$. From \eqref{dddt}, one can write it as \[ \begin{split} \frac{1}{\xi}\partial_\xi [\frac{\phi_{n+1}}{\xi^2}\partial_\xi[\frac{\phi_{n+1}^6}{\xi^2} \partial_\xi[\frac{\partial_t\phi_{n+1}} {\phi_{n+1}}]]]&=\frac{1}{\xi}\partial_\xi[\frac{\phi_{n+1}^4}{\phi_n^4}\cdot \frac{\phi_n^2}{\xi}V_n^\ast D_{n+1}]+\frac{1}{3\xi}\partial_\xi[\frac{\phi_{n+1}^4}{\xi^2} (4A-3B-C)]\\ &\equiv (I)+(II) \end{split} \] $(I)$ can be decomposed as follows: \[ (I)=\frac{\phi_{n+1}^4}{\phi_n^4}G_{n+1}+ \xi\partial_\xi[\frac{\phi_{n+1}^4}{\phi_n^4}]\frac{\phi_n^2}{\xi^2} \frac{V_n^\ast D_{n+1}}{\xi} \] hence it is bounded. 
For $(II)$, we put $A,B,C$ back into the expression: \[ \begin{split} 3\,(II)&=\frac{1}{\xi}\partial_\xi[\frac{\phi_{n+1}^4}{\phi_n^4}\{4 \frac{\partial_t\phi_n}{\phi_n}\int_0^\xi\xi_1G_{n+1}d\xi_1-3 \frac{\phi_n^2}{\xi}F_{n+1}-\int_0^\xi \xi_1J_n^1d\xi_1\}]\\ &=\frac{\phi_{n+1}^4}{\phi_n^4}\{4\frac{\partial_t\phi_n}{\phi_n} G_{n+1}+\xi\partial_\xi[\frac{\partial_t\phi_n}{\phi_n}]\cdot\frac{1}{\xi^2} \int_0^\xi\xi_1G_{n+1}d\xi_1-3V_nF_{n+1}-J_n^1 \}\\&\;\;+\xi\partial_\xi[\frac{\phi_{n+1}^4}{\phi_n^4}] \{4\frac{\partial_t\phi_n}{\phi_n}\frac{1}{\xi^2} \int_0^\xi\xi_1G_{n+1}d\xi_1-3\frac{\phi_n^2}{\xi^2}\frac{F_{n+1}}{\xi} -\frac{1}{\xi^2}\int_0^\xi \xi_1J_n^1d\xi_1\} \end{split} \] Therefore, by the estimates \eqref{g1} and \eqref{g2} of $\int_0^\xi\xi_1G_{n+1}d\xi_1$ and $\int_0^\xi \xi_1J_n^1d\xi_1$, we conclude that $(II)$ is also bounded by $\widetilde{\mathcal{E}}_{n+1},\,\widetilde{\mathcal{E}}_{n}$. This finishes the proof of the Claim. \end{proof} \section{Proof of Theorem \ref{thm}}\label{6} In order to prove Theorem \ref{thm}, it now remains to show that $\phi_n,\,u_n$ converge, that the limit functions solve the Euler equations \eqref{euler}, and that they are unique. \subsection{Existence} First, by applying Gronwall's inequality to the energy inequality in Proposition \ref{EE}, we can deduce the following claim. \begin{claim}\label{claim2} Suppose that the initial data $\phi_{0}(\xi)$ and $u_{0}(\xi)$ of the Euler equations \eqref{euler} are given such that $\frac{1}{C_0}\leq \frac{\phi_{0}}{\xi}\leq C_0$ for a constant $C_0>1$, and $\mathcal{E}^{k,\lceil k\rceil+3}(\phi_{0},u_{0})\leq A$ for a constant $A>0$. Then there exists $T>0$ such that if $\widetilde{\mathcal{E}}^k_{m}\leq \frac{3}{2}A$ for $t\leq T$ for all $m\leq n$, then $\widetilde{\mathcal{E}}^k_{n+1}\leq \frac{3}{2}A$ for $t\leq T$; in addition, for all $n$, $\frac{1}{2C_0}\leq \frac{\phi_{n}}{\xi}\leq 2C_0$ for $t\leq T$. 
\end{claim} Thus we obtain a uniform bound on the $\widetilde{\mathcal{E}}^k_{n}$'s, as well as uniform upper and lower bounds on $\frac{\phi_n}{\xi}$. Since the approximate energy functionals \eqref{aef} depend not only on the approximate functions $G_{n+1},\,F_{n+1}$ but also on $\phi_n$, the corresponding Banach space changes at every step. In order to take the limit, it is desirable to have a fixed space in which the $G_{n+1},\,F_{n+1}$ live. The plan is as follows: by making use of Proposition \ref{equiv e}, we prove the equivalence between our energy functionals and the energy functional induced by \eqref{ho}, so that the approximate functions have uniform energy bounds in the Banach space induced by \eqref{ho} and thus we can apply the fixed point theorem. We present the detailed analysis for $k=1$. Recall the homogeneous operators $\overline{V}$ and $\overline{V}^\ast$: \begin{equation*} \begin{split} \overline{V}(f)\equiv \frac{1}{\xi}\partial_\xi [\xi f],\;\;\; \overline{V}^\ast(g)\equiv-\xi\partial_\xi[ \frac{g}{\xi}] \end{split} \end{equation*} and we define the corresponding energy functional $\overline{\mathcal{E}}_{n+1}$: \begin{equation} \overline{\mathcal{E}}_{n+1}(t)\equiv \sum_{i=0}^1\int \frac{1}{9}|(\overline{V}^\ast)^iG_{n+1}|^2 +|(\overline{V})^iF_{n+1}|^2d\xi \end{equation} We claim that $\widetilde{\mathcal{E}}_{n+1}$ and $\overline{\mathcal{E}}_{n+1}$ are equivalent in a suitable sense. To compute $\widetilde{\mathcal{E}}_{n+1}$, we write $V_n$ and $V_n^\ast$ in terms of $\overline{V}$ and $\overline{V}^\ast$. 
\[ \begin{split} \bullet &\;V_n^\ast G_{n+1}=\frac{\phi_n^2}{\xi^2}\overline{V}^\ast G_{n+1}\\ \bullet &\;V_nF_{n+1}=\frac{\phi_n^2}{\xi^2}\overline{V}F_{n+1}+ 2(\frac{\phi_n}{\xi}\partial_\xi\phi_n-\frac{\phi_n^2}{\xi^2}) \frac{F_{n+1}}{\xi} \end{split} \] Note that from the definition of $D_n$ \[ \begin{split} |\frac{\phi_n^2}{\xi^2}\partial_\xi\phi_n|= |\frac{1}{3}\frac{D_n}{\xi}|\leq ||\frac{\phi_{n-1}^2}{\xi^2}\partial_\xi\phi_{n-1}||_{L^\infty_\xi} +\frac{2}{3^{\frac{3}{2}}}\xi^{\frac{1}{2}} C_{n-1}^4||G_n||_{L^2_\xi}\leq I+ C_{n-1}^4||G_n||_{L^2_\xi} \end{split} \] where $I=||\frac{\phi_{0}^2}{\xi^2}\partial_\xi\phi_{0}||_{L^\infty_\xi}$ and $C_{n-1}$ is the bound of $\frac{\phi_{n-1}}{\xi}$ as in \eqref{n}. Therefore, we deduce that \[ \widetilde{\mathcal{E}}_{n+1}\leq (1+M_n)\overline{\mathcal{E}}_{n+1} \] where $M_n\equiv 5 C_n^2\{C_n^2+I^2+C_{n-1}^8\overline{\mathcal{E}}_n\}$. To show the converse, namely that $\overline{\mathcal{E}}_{n+1}$ is bounded by $\widetilde{\mathcal{E}}_{n+1}$, we rewrite $\overline{V}$ and $\overline{V}^\ast$ in terms of $V_n$ and $V_n^\ast$. \[ \begin{split} \bullet &\;\overline{V}^\ast G_{n+1}=\frac{\xi^2}{\phi_n^2}V_n^\ast G_{n+1}\\ \bullet &\;\overline{V}F_{n+1}=\frac{\xi^2}{\phi_n^2}V_nF_{n+1}+ 2(1-\frac{\xi}{\phi_n}\partial_\xi\phi_n) \frac{F_{n+1}}{\xi} \end{split} \] Thus we reach the same conclusion: \[ \overline{\mathcal{E}}_{n+1}\leq (1+M_n)\widetilde{\mathcal{E}}_{n+1} \] Note that the $M_n$'s are uniformly bounded over $t\leq T$ by Claim \ref{claim2}. Therefore, there exists a sequence $n_l$ so that $G_{n_l},$ $F_{n_l},$ $\phi_{n_l},$ $u_{n_l}$ converge strongly to some $G,\;F,\;\phi,\;u$. Due to the uniform energy bound, we also conclude that $G,\;F$ solve the equations \eqref{i=2j+1} for $j=1$ and, moreover, that $\phi,\;u$ solve the Euler equations \eqref{euler} with the desired properties. Next, we turn to the general case. 
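Before doing so, we record for completeness that the commutation identity for $V_nF_{n+1}$ used in the $k=1$ computation above can be checked by direct differentiation:

```latex
V_nF_{n+1}
 =\frac{1}{\xi}\partial_\xi\Big[\frac{\phi_n^{2}}{\xi}F_{n+1}\Big]
 =\frac{2\phi_n\partial_\xi\phi_n}{\xi^{2}}F_{n+1}
  -\frac{\phi_n^{2}}{\xi^{3}}F_{n+1}
  +\frac{\phi_n^{2}}{\xi^{2}}\partial_\xi F_{n+1}\,,
\qquad
\frac{\phi_n^{2}}{\xi^{2}}\overline{V}F_{n+1}
 =\frac{\phi_n^{2}}{\xi^{2}}\partial_\xi F_{n+1}
  +\frac{\phi_n^{2}}{\xi^{3}}F_{n+1}\,,
```

and subtracting the two expressions leaves exactly the remainder $2(\frac{\phi_n}{\xi}\partial_\xi\phi_n-\frac{\phi_n^2}{\xi^2})\frac{F_{n+1}}{\xi}$.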
Back to the approximate system \eqref{FG}, we define the corresponding homogeneous energy functional: \begin{equation} \overline{\mathcal{E}}_{n+1}^k(t)\equiv ||\frac{1}{2k+1}G_{n+1}||^2_{\overline{Y}^{k,\lceil k\rceil}} +||F_{n+1}||^2_{\overline{X}^{k,\lceil k\rceil}} \label{lefk} \end{equation} From Proposition \ref{equiv e}, one can derive the equivalence of the associated energy functional \eqref{lefk} and the original approximate functional \eqref{aef}: there exists $M_n>0$, depending only on the initial data, $C_n$, $C_{n-1}$, and $\overline{\mathcal{E}}_n$, such that \[ \frac{1}{1+M_n}\overline{\mathcal{E}}_{n+1}^k\leq \widetilde{\mathcal{E}}_{n+1}^k \leq (1+ M_n) \overline{\mathcal{E}}_{n+1}^k. \] By the same reasoning as in the case $k=1$, the existence of $G,\;F,\;\phi,\;u$ follows. \subsection{Uniqueness} In order to prove Theorem \ref{thm}, it only remains to prove uniqueness. Let $(\phi_1,u_1)$ and $(\phi_2,u_2)$ be two regular solutions to \eqref{euler} with the same initial and boundary conditions and with $\mathcal{E}^{k,\lceil k\rceil+3}(\phi_1,u_1),\,\mathcal{E}^{k,\lceil k\rceil+3}(\phi_2,u_2)\leq 2A$. Define $\mathcal{D}(t)$ by \[ \begin{split} \mathcal{D}(t)\equiv& \int \frac{1}{2k+1}|\xi^k(\phi_1-\phi_2)|^2 +|\xi^k(u_1-u_2)|^2d\xi \\+&\sum_{i=1}^2\int \frac{1}{(2k+1)^2}|(V_1)^i(\xi^k\phi_1)-(V_2)^i(\xi^k\phi_2)|^2 +|(V_1^\ast)^i(\xi^ku_1)-(V_2^\ast)^i(\xi^ku_2)|^2d\xi\,, \end{split} \] where $V_j,V_j^\ast$ are the operators $V,V^\ast$ induced by $\phi_j$. We will prove that $\frac{d}{dt}\mathcal{D}\leq \mathcal{C}\mathcal{D}$, which, since $\mathcal{D}(0)=0$, immediately gives uniqueness by Gr\"onwall's inequality. Let us consider only the $i=2$ case in $\mathcal{D}(t)$. Recall the system \eqref{i=2}.
By subtracting the two systems from each other, we obtain the equations for $(V_1^\ast V_1(\xi^k\phi_1)-V_2^\ast V_2(\xi^k\phi_2),\,V_1V_1^\ast (\xi^ku_1)-V_2V_2^\ast(\xi^ku_2))$: \[ \begin{split} &\partial_t\{V_1^\ast V_1(\xi^k\phi_1)-V_2^\ast V_2(\xi^k\phi_2)\}-(2k+1)V_1^\ast \{V_1V_1^\ast (\xi^ku_1)-V_2V_2^\ast(\xi^ku_2)\}\\ &=(2k+1)\{V_1^\ast V_2V_2^\ast(\xi^ku_2)-V_2^\ast V_2V_2^\ast(\xi^ku_2)\}+\{(V_1^\ast)_tV_1(\xi^k\phi_1)- (V_2^\ast)_tV_2(\xi^k\phi_2)\} \\ &\partial_t\{V_1V_1^\ast (\xi^ku_1)-V_2V_2^\ast(\xi^ku_2)\}+\frac{1}{2k+1}V_1 \{V_1^\ast V_1(\xi^k\phi_1)-V_2^\ast V_2(\xi^k\phi_2)\}\\ &=\frac{1}{2k+1}\{\underbrace{V_2V_2^\ast V_2(\xi^k\phi_2)-V_1V_2^\ast V_2(\xi^k\phi_2)}_{(I)}\}+2\{\underbrace{(V_1)_tV_1^\ast(\xi^ku_1)- (V_2)_tV_2^\ast(\xi^ku_2)}_{(II)}\} \end{split} \] We have to show that the $L^2_\xi$ norms of the right hand sides are bounded by $\mathcal{D}^{\frac{1}{2}}$. We estimate $(I)$ and $(II)$; the other two terms can be treated in the same way. First, we note that \[ \phi_1^{2k+1}-\phi_2^{2k+1}=\int_0^\xi(\xi')^k\{V_1(\xi^k\phi_1) -V_2(\xi^k\phi_2)\} d\xi'\,. \] Simply applying H\"older's inequality, one gets \begin{equation}\label{b1} ||\frac{\phi_1^{2k+1}}{\xi^{k+\frac{1}{2}}}- \frac{\phi_2^{2k+1}}{\xi^{k+\frac{1}{2}}}||_{L^\infty_\xi}\leq ||V_1(\xi^k\phi_1) -V_2(\xi^k\phi_2)||_{L^2_\xi}\,. \end{equation} Applying H\"older's inequality once more: \[ \begin{split} \int_0^\xi(\xi')^{k+\frac{1}{2}}\frac{V_1(\xi^k\phi_1) -V_2(\xi^k\phi_2)}{\sqrt{\xi'}} d\xi'\leq \xi^{k+1}&\{ ||V_1(\xi^k\phi_1) -V_2(\xi^k\phi_2)||_{L^2_\xi}\\ &+\underbrace{||\sqrt{\xi}V_1^\ast(V_1(\xi^k\phi_1) -V_2(\xi^k\phi_2))||_{L^2_\xi}}_{(\ast\ast)}\} \end{split} \] Note that it is not trivial to obtain the boundedness of $(\ast\ast)$ in terms of $\mathcal{D}^{\frac{1}{2}}$, since it is not yet of the right form.
Here is the estimate of $(\ast\ast)$: \[ (\ast\ast)\leq ||\sqrt{\xi}\{V_1^\ast V_1(\xi^k\phi_1)-V_2^\ast V_2(\xi^k\phi_2)\}||_{L^2_\xi}+||\sqrt{\xi}\{V_2^\ast V_2(\xi^k\phi_2)-V_1^\ast V_2(\xi^k\phi_2)\}||_{L^2_\xi} \] The second term can be written as \[ (\frac{\phi_1^{2k}}{\xi^{k-\frac{1}{2}}}-\frac{\phi_2^{2k}}{\xi^{k-\frac{1}{2}}}) \partial_\xi[\frac{1}{\xi^k}V_2(\xi^k\phi_2)]=- (\frac{\phi_1^{2k}}{\xi^{k-\frac{1}{2}}}-\frac{\phi_2^{2k}}{\xi^{k-\frac{1}{2}}}) \frac{\xi^{2k}}{\phi_2^{2k}} \frac{V_2^\ast V_2(\xi^k\phi_2)}{\xi^k}\,. \] Hence, by using \eqref{b1}, we deduce that \begin{equation}\label{b2} ||\frac{\phi_1^{2k+1}}{\xi^{k+1}}-\frac{\phi_2^{2k+1}}{\xi^{k+1}}||_{L^\infty_\xi} \sim ||\frac{\phi_1^{2k}}{\xi^k}-\frac{\phi_2^{2k}}{\xi^k}||_{L^\infty_\xi}\leq C\mathcal{D}^{\frac{1}{2}}\,. \end{equation} Note that one cannot hope to bound $\frac{\phi_1^{2k+1}}{\xi^{2k+1}}-\frac{\phi_2^{2k+1}}{\xi^{2k+1}}$ with the $\mathcal{D}$-regularity. Of course, it is bounded by $A$, but for the purpose of uniqueness the $A$-bound is not useful for the difference terms. The idea is to rearrange $(I)$ and $(II)$ so that $\mathcal{D}^{\frac{1}{2}}$ can be extracted from each term. We write the $L_\xi^\infty$ factors first and the $L_\xi^2$ factor last. $\mathcal{D}^{\frac{1}{2}}$ can also come from an $L_\xi^\infty$ factor thanks to the estimate \eqref{b2}. We start with $(I)$: \[ \begin{split} (I)=(1-\frac{\phi_1^{2k}}{\phi_2^{2k}})V_2V_2^\ast V_2(\xi^k\phi_2)-\underline{\frac{\phi_2^{2k}}{\xi^{2k}} \partial_\xi[\frac{\phi_1^{2k}}{\phi_2^{2k}}]V_2^\ast V_2(\xi^k\phi_2)}_{(\star)} \end{split} \] The first term can be written as $(\frac{\phi_1^{2k}}{\xi^k}-\frac{\phi_2^{2k}}{\xi^k})\frac{\xi^{2k}}{\phi_2^{2k}} \frac{V_2V_2^\ast V_2(\xi^k\phi_2)}{\xi^k}$ and thus its $L^2_\xi$ norm is bounded by $A$ and $\mathcal{D}^{\frac{1}{2}}$.
The second term can be treated as follows: \[ \begin{split} (\star)&=2k \frac{\xi}{\phi_1}[\frac{\phi_1^{2k}}{\xi^k}\partial_\xi\phi_1 -\frac{\phi_1^{2k}}{\xi^k}\frac{\phi_1}{\phi_2}\partial_\xi\phi_2]\frac{V_2^\ast V_2(\xi^k\phi_2)}{\xi^{k+1}}\\ &=\frac{2k}{2k+1} \frac{\xi}{\phi_1}\frac{V_2^\ast V_2(\xi^k\phi_2)}{\xi^{k}}\frac{V_1(\xi^k\phi_1)-V_2(\xi^k\phi_2)}{\xi}+ 2k \frac{\xi}{\phi_1}\partial_\xi\phi_2 (\frac{\phi_2^{2k}}{\xi^k}-\frac{\phi_1^{2k}}{\xi^k}\frac{\phi_1}{\phi_2}) \frac{V_2^\ast V_2(\xi^k\phi_2)}{\xi^{k+1}} \end{split} \] Thus $(\star)$ is controlled by $A$ and $\mathcal{D}^{\frac{1}{2}}$. Next we rearrange $(II)$: \[ \begin{split} &\frac{1}{2k}(II)=\frac{\partial_t\phi_1}{\phi_1}\{V_1V_1^\ast(\xi^ku_1) -V_2V_2^\ast(\xi^ku_2)\}+\underline{\frac{V_2V_2^\ast(\xi^ku_2)}{\xi^k} \{\frac{\xi^k\partial_t\phi_1}{\phi_1}- \frac{\xi^k\partial_t\phi_2}{\phi_2}\}}_{(\star\star)}\\ &\:+\frac{\phi_1^{2k}}{\xi^{2k}}\frac{V_1^\ast(\xi^ku_1)}{\xi^{k+1}} \{\xi^{k+1}\partial_\xi[\frac{\partial_t\phi_1}{\phi_1}] -\xi^{k+1}\partial_\xi[\frac{\partial_t\phi_2}{\phi_2}] \} +\xi\partial_\xi[\frac{\partial_t\phi_2}{\phi_2}] \frac{\phi_1^{2k}}{\xi^{2k}}\{\frac{V_1^\ast(\xi^ku_1)}{\xi} -\frac{V_2^\ast(\xi^ku_2)}{\xi}\}\\ &\:+\xi\partial_\xi[\frac{\partial_t\phi_2}{\phi_2}] (\frac{\phi_1^{2k}}{\xi^k}-\frac{\phi_2^{2k}}{\xi^k}) \frac{V_2^\ast(\xi^ku_2)}{\xi^{k+1}} \end{split} \] For $t$ derivative difference terms, use the equation to convert into $u$ terms and apply the same argument. For instance, the second term can be rewritten as \[ \begin{split} (\star\star)&=\frac{V_2V_2^\ast(\xi^ku_2)}{\xi^k}\{\frac{V_1^\ast(\xi^ku_1)}{\phi_1} -\frac{V_2^\ast(\xi^ku_2)}{\phi_2}\}\\ &=\frac{V_2V_2^\ast(\xi^ku_2)}{\xi^k}\frac{V_1^\ast(\xi^ku_1) -V_2^\ast(\xi^ku_2)}{\phi_1}+ \frac{V_2V_2^\ast(\xi^ku_2)}{\xi^k}\frac{V_2^\ast(\xi^ku_2)}{\xi^{k+1}} \{\frac{\xi^{k+1}}{\phi_1}-\frac{\xi^{k+1}}{\phi_2}\}\,. 
\end{split} \] It is easy to deduce that $(\star\star)$ is bounded by $A$ and $\mathcal{D}^{\frac{1}{2}}$. The other terms can be estimated similarly. This finishes the proof of the uniqueness and of Theorem \ref{thm}. \section{Duality argument}\label{7} Here we prove the existence of solutions for the linear problem \eqref{FG}. This is a consequence of the following proposition. \begin{proposition} For $f$ and $g$ in $L^1(0,T; L^2)$, there exists a unique solution $(F,G) $ to the linear system \begin{equation}\label{FG1} \left\{ \begin{split} \partial_tF - V^\ast G &=f \\ \partial_tG + V F &=g \\ G(\xi=1) & =0, \\ F(t=0) = G(t=0) &= 0 \end{split} \right. \end{equation} on $(0,T)$ which satisfies \begin{equation}\label{dual-L2} \| (F,G) \|_{C([0,T] ; L^2) } \leq C \| (f,g) \|_{L^1 L^2 }\,. \end{equation} Moreover, if $(f,g) \in L^1(0,T; X^{k,j} \times Y^{k, j}) $ for some integer $j \leq \lceil k\rceil $, and if for $0\leq 2i \leq j-1 $ we have $ (V^\ast)^{2i} g = 0 $ at $\xi = 1$ and for $1\leq 2i + 1 \leq j-1 $ we have $ (V)^{2i+1} f = 0 $ at $\xi = 1$, then \begin{equation} \label{dual-reg} \| (F,G) \|_{C([0,T] ; X^{k,j} \times Y^{k, j} ) } \leq C \| (f,g) \|_{L^1 (X^{k,j} \times Y^{k, j}) } \end{equation} for some constant $C$ which depends only on $\| \xi^k \phi \|_{X^{k,\lceil k\rceil + 3 }} $. \end{proposition} The proof is based on a duality argument. Let $\mathcal A$ denote the set \[ \mathcal A = \{ {\phi \choose \psi} \in C^\infty((0,\infty) \times (0,1]) \ \hbox{such that} \ \psi(\xi=1)=0, \ (\phi,\psi)_{t=T} = 0 \} . \] Then $(F,G ) $ solves \eqref{FG1} on the time interval $(0,T)$ if and only if for each test function $(\phi, \psi) \in \mathcal A$, we have \begin{equation} \label{FG-weak} \begin{split} \int_0^T \int -F \partial_t \phi - G V \phi &= \int_0^T \int f \phi \\ \int_0^T \int -G \partial_t \psi + F V^\ast \psi &= \int_0^T \int g \psi.
\end{split} \end{equation} We denote $\L {F \choose G} = {\partial_t F -V^\ast G \choose \partial_t G + V F} $, defined on the core $$ \{ {F \choose G} \ | \ \partial_t {F \choose G} \in L^2_tL^2_\xi, \ (F,G) \in L^2_t (\mathcal{D}(V),\mathcal{D}(V^\ast) ) \} . $$ Hence, $\L$ can be extended in a unique way to a closed operator. Moreover, $\mathcal A \subset \mathcal{D}(\L^\ast)$, the domain of the adjoint $\L^\ast$ of $\L$, and $ \L^\ast {\phi \choose \psi} = { -\partial_t \phi + V^\ast \psi \choose - \partial_t \psi - V \phi} $. Hence, \eqref{FG-weak} holds for each $(\phi, \psi) \in \mathcal A$ if and only if, for each $(\phi, \psi) \in \mathcal A$, we have \begin{equation} \label{FG-dual} \int_0^T \int {F \choose G} . \L^\ast {\phi \choose \psi} = \int_0^T \int {f \choose g} . {\phi \choose \psi}. \end{equation} We take ${\phi \choose \psi} \in \mathcal A$ and denote $\L^\ast {\phi \choose \psi} = {\Phi \choose \Psi} $. The energy estimate written for $ \L^\ast $ yields \[ \sup_{0\leq t \leq T} \| \phi \|_{L^2}^2 + \| \psi \|_{L^2}^2 \leq C \Big( \int_0^T \| \Phi \|_{L^2} + \| \Psi \|_{L^2} \Big)^2. \] Hence, the operator $\L^\ast $ defines a bijection between $\mathcal A$ and $\L^\ast(\mathcal A)$. Let $S_0$ be the inverse: \begin{equation*} \begin{array}{cccc} S_0 :& \L^\ast(\mathcal A) & \to & \mathcal A \\ & {\Phi \choose \Psi} & \to & {\phi \choose \psi} \end{array} \end{equation*} and we have \begin{equation*} \| S_0 {\Phi \choose \Psi} \|_{C([0,T]; L^2)} \leq C \|{\Phi \choose \Psi} \|_{L^1 L^2}. \end{equation*} We extend this operator by density to $\overline{\L^\ast(\mathcal A)}^{L^1 L^2}$ and then to all of $L^1 L^2$ by Hahn-Banach. We denote this extension by $S$: \begin{equation*} \begin{array}{cccc} S :& L^1 L^2 & \to & C([0,T]; L^2) \\ & {\Phi \choose \Psi} & \to & {\phi \choose \psi} \end{array} \end{equation*} Now, we want to solve \eqref{FG1}, namely $ \L {F \choose G} = {f \choose g} $ with $ {F \choose G}_{t=0} = 0 $.
This is, of course, equivalent to the fact that \eqref{FG-dual} holds for each $(\phi, \psi) \in \mathcal A$. Hence it is enough that for all $ {\Phi \choose \Psi} \in L^1 L^2 $, we have \begin{equation} \label{FG-dual-Phi} \int_0^T \int {F \choose G} .{\Phi \choose \Psi} = \int_0^T \int {f \choose g} . S {\Phi \choose \Psi} . \end{equation} Therefore, it is enough to take $ {F \choose G} = S^\ast {f \choose g} $, where $S^\ast $ is the dual of $S$, which satisfies \begin{equation*} \begin{array}{cccc} S^\ast :& \mathcal{M}(0,T; L^2) & \to & L^\infty(0,T; L^2). \\ \end{array} \end{equation*} In particular it maps $L^1L^2$ into $L^\infty L^2$. Hence \eqref{dual-L2} holds with $C([0,T] ; L^2)$ replaced by $L^\infty ([0,T] ; L^2) $. At this stage we do not know whether $ {F \choose G} $ is continuous in time; this will follow from the regularity. The uniqueness of $ {F \choose G} $ follows from the fact that if $f=g=0$ in \eqref{FG1}, then ${0 \choose 0}$ is the unique solution to \eqref{FG1} in $L^\infty ([0,T] ; L^2) $. To prove this, consider a solution $ {F \choose G} \in L^\infty ([0,T] ; L^2) $ to \eqref{FG1} with $f=g=0$. We again use the duality argument. Indeed, arguing as above and exchanging the roles of $\L$ and $\L^\ast$, we can prove the existence of a solution $ {\phi \choose \psi} \in L^\infty ([0,T] ; L^2) $ to the dual problem \begin{equation}\label{FG1-dual} \left\{ \begin{split} - \partial_t \phi + V^\ast \psi &= \Phi \\ - \partial_t\psi - V \phi &= \Psi \\ \psi(\xi=1) & =0, \\ \phi(t=T) = \psi(t=T) &= 0 \end{split} \right. \end{equation} for each $ {\Phi \choose \Psi} \in L^1 L^2 $. Then, for each $ {\Phi \choose \Psi} \in L^1 L^2 $, we consider a solution $ {\phi \choose \psi} $ to \eqref{FG1-dual} and write \eqref{FG-dual} with this $ {\phi \choose \psi} $. This yields \begin{equation} \label{FG-unique} \int_0^T \int {F \choose G} . {\Phi \choose \Psi} = 0, \end{equation} which implies that $ F=G=0 $.
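For the reader's convenience, the formal $L^2$ energy identity underlying both the estimate for $\L^\ast$ and this uniqueness step can be sketched as follows (a purely formal computation, assuming enough regularity to integrate by parts). For a solution of \eqref{FG1}, \[ \frac{1}{2}\frac{d}{dt}\int |F|^2+|G|^2\,d\xi =\int F\,(V^\ast G+f)+G\,(-VF+g)\,d\xi =\int fF+gG\,d\xi, \] since $\int F\,V^\ast G\,d\xi=\int (VF)\,G\,d\xi$ once the boundary term at $\xi=1$ vanishes thanks to $G(\xi=1)=0$. Integrating in time yields $\sup_{t\leq T}\|(F,G)\|_{L^2}\leq \|(f,g)\|_{L^1L^2}$, and the same computation applied to $\L^\ast$ (backward in time, using $\psi(\xi=1)=0$) gives the estimate used above.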
To prove \eqref{dual-reg}, we argue by induction on $j$. We start with the case $j = 1$ and first argue formally. Applying $V$ and $V^\ast$ to \eqref{FG1}, we get \begin{equation}\label{FG1-V} \left\{ \begin{split} \partial_t V^\ast G + V^\ast V F &= V^\ast g + V_t^\ast G \\ \partial_t VF - V V^\ast G &= V f + V_t F \\ V F (\xi=1) & =0, \\ V F(t=0) = V^\ast G(t=0) &= 0 . \end{split} \right. \end{equation} Notice that the boundary condition $V F (\xi=1) =0$ comes from the fact that $g=0$ at $\xi=1$. Hence, we deduce formally that \begin{equation*} \| (V F, V^\ast G) \|_{L^\infty ([0,T] ; L^2) } \leq C \| (V^\ast g + V_t^\ast G , Vf +V_t F ) \|_{L^1 L^2 }\,. \end{equation*} To make this rigorous, we first define ${Y_0 \choose Z_0 } $, the solution of \eqref{FG1-V} with the right hand side replaced by $ { V^\ast g \choose Vf } $. Hence, $ \L {Y_0 \choose Z_0 } = { V^\ast g \choose Vf } $. Then, we define for each integer $i$, ${Y_i \choose Z_i } $, the solution of \eqref{FG1-V} with the right hand side replaced by ${ V_t^\ast V^{\ast-1} Y_{i-1} \choose V_t V^{-1} Z_{i-1} } = { \frac{\phi_t}{\phi} Y_{i-1} \choose V_t V^{-1} Z_{i-1} } $. Hence, \begin{equation*} \| (Y_0, Z_0) \|_{L^\infty ([0,T] ; L^2) } \leq C \| (V^\ast g , Vf ) \|_{L^1 L^2 } \end{equation*} and, using the fact that $V_t V^{-1}$ is bounded from $L^2$ to $L^2$, we get that \begin{equation*} \begin{split} \| (Y_i, Z_i) \|_{L^\infty ([0,T] ; L^2) } \leq C \| ( Y_{i-1}, Z_{i-1} ) \|_{L^1 L^2 } & \leq CT \| ( Y_{i-1}, Z_{i-1} ) \|_{L^\infty L^2 } \\ & \leq C (CT)^i \| ( V^\ast g , Vf ) \|_{L^1 L^2 } \,. \end{split} \end{equation*} Hence, for $T$ small enough that $CT<1$ (the general case follows by iterating on subintervals), the series $$ {Y \choose Z} = \sum_{i=0}^\infty {Y_i \choose Z_i} $$ converges in $L^\infty L^2$, and we see that ${Y \choose Z} $ solves \begin{equation}\label{FG1-YZ} \left\{ \begin{split} \partial_t Y + V^\ast Z &= V^\ast g + V_t^\ast V^{\ast-1} Y \\ \partial_t Z - V Y &= V f + V_t V^{-1} Z \\ Z (\xi=1) & =0, \\ Z (t=0) = Y (t=0) &= 0 . \end{split} \right.
\end{equation} The operator $V$ can be thought of as an operator from $\mathcal{D}(V)$ to $L^2_\xi$. It has a transpose $^tV$ which goes from $L^2_\xi$ to $\mathcal{D}(V)' $, the dual space of $\mathcal{D}(V) $. It satisfies: for $f\in \mathcal{D}(V)$ and $u \in L^2$, \begin{equation*} (V(f), u)_{L^2, L^2} = (f,^tV(u) )_{\mathcal{D}(V), \mathcal{D}(V)' }\,. \end{equation*} Moreover, if we identify $L^2$ with a subspace of $\mathcal{D}(V)' $, then $^tV$ extends $V^\ast$ to $L^2$. The same argument shows that we can also extend $V$ to $L^2$ by considering $^tV^\ast$. Hence, one can also interpret the system \eqref{FG1} as equalities in $\mathcal{D}(V)'$ and $\mathcal{D}(V^\ast)'$, and the same remark holds for \eqref{FG1-YZ}. Using that $V[\partial_t (V^{-1} Z ) ] = \partial_t Z - V_t V^{-1} Z $, we deduce that $V [\partial_t (V^{-1} Z ) - Y - f ] = 0 $. Since the kernel of $V$ is $\{0\}$, we deduce that \begin{equation*} \partial_t (V^{-1} Z ) - Y = f\,. \end{equation*} We also have $V^\ast[\partial_t (V^{\ast-1} Y ) ] = \partial_t Y - V^\ast_t V^{\ast-1} Y$. Hence, the first equation of \eqref{FG1-YZ} can be written $$ V^\ast[\partial_t (V^{\ast-1} Y ) + Z - g ] = 0 \,. $$ Since the kernel of $V^\ast$ is $\{0\}$ once we add the vanishing boundary condition at $\xi=1$, we deduce that \begin{equation*} \partial_t (V^{\ast-1} Y ) + Z - g = 0\,. \end{equation*} By uniqueness for \eqref{FG1}, we deduce that $F = V^{-1} Z $ and $G = V^{\ast-1} Y $. Hence, we get that $(F,G ) \in L^\infty(0,T; X^{k,1} \times Y^{k,1}) $. In particular, this also shows that $(\partial_t F, \partial_t G) \in L^1(0,T; L^2) $. Hence, integrating in time, we deduce that $(F,G) \in C([0,T]; L^2)$. In particular, this shows that \eqref{dual-L2} holds if we have more regularity on $(f,g)$. By a density argument, \eqref{dual-L2} then holds even if we only know that $(f,g) \in L^1L^2$.
Arguing by induction on $j$, we prove that if $(f,g) \in L^1(0,T; X^{k,j} \times Y^{k,j}) $, then $(F,G) \in C([0,T] ; X^{k,j} \times Y^{k,j}) $. \section{Acknowledgement} N. M. was partially supported by NSF grant DMS-0703145. J. J. was supported by NSF grant DMS-0635607.
\section{Introduction} Here I discuss the general (physical) scheme of the DY\_AB series of event generators, concentrating on the latest version of this code, DY\_AB5. DY\_AB5 is a generator of dilepton Drell-Yan events\cite{DrellYan70} in hadron-hadron, hadron-nucleus and hadron-(partly polarized molecular target) collisions. It is aimed at fast preliminary simulation of that subset of Drell-Yan experiments where (i) the center of mass energy is ``intermediate'' (from a few GeV to a few tens of GeV); (ii) the projectile may be any light hadronic species (charged pion, proton, antiproton), possibly polarized; (iii) the target is in general a molecular species, with partial normal polarization of some of its component nuclei; (iv) the final leptons exhibit azimuthal asymmetries, and these asymmetries are the goal of the measurement. Several experimental proposals have been presented or are in preparation in this field\cite{panda,assia,pax,compassDY,rhic2}. The main difficulty of such experiments is the need to select regions of the overall phase space where the event rates are small (in particular, transverse momentum above 2 GeV/c) and where two event numbers (e.g., before/after reversing the spin) must be compared to identify small asymmetries. It is essential to understand from the very beginning which overall event numbers are needed to reach a satisfactory population of the interesting subregions. The code discussed here is aimed at such ``preliminary'' investigations, for experimental planning only. Since it is based on strong phenomenological components, it is not suitable as such for theoretical analyses at the quark-parton level.
To date, five (private) versions of this code, named DY\_AB1, 2, 3, 4, and 5, have been used and tested by the author and by other users, both for phenomenological publications\cite{BRMCa,BRMCb,BRMCc,BRMCd,BR_JPG2,AB_06} and for exploratory simulations aimed at experimental proposals\cite{panda,assia,compassDY} (see e.g. \cite{panda_note}, \cite{maggiora1}). The latest release, DY\_AB5, is public\footnote{It may be obtained from the author: bianconi@bs.infn.it}. This is not a multi-purpose code. Its main advantages derive from its specificity: (i) easy insertion of new parametrizations for the distribution functions associated with azimuthal asymmetries, (ii) easy control and modification of the code, (iii) the possibility of treating events simultaneously in the Collins-Soper reference frame and in the fixed target or collider frame, (iv) fast generation of events, (v) satisfactory phenomenological reproduction of transverse momentum distributions. The present note is not the ``readme'' handbook of the code; the code itself is normally accompanied by a readme file supplying user help. Here I discuss the physical scheme used for the event generation. This is inspired by the parton model, but some simplifications or phenomenological parameterizations have been introduced into the standard relations for the cross section. There are two reasons for these simplifications: (i) This code is not aimed at improving the theoretical understanding of quark-quark interactions; it is used to reproduce, as realistically as possible, event distributions and the associated errors in measurements where some gross features of the data are already well known while others are largely unknown. (ii) At the present stage, the real point is to understand whether, or to what extent, certain measurements will be possible. This requires a huge number of exploratory simulations, to be run in the smallest possible time and with the maximum possible flexibility.
I hope this presentation clarifies the frameworks in which the code can, or should not, be used. \subsection{Development notes} This code has been written in c++ since the first version. It began as a toy model Drell-Yan generator, aimed at fast exploratory simulations for the Drell-Yan measurement within the PANDA experiment\cite{panda_note}. After the very first applications, the number of options increased exponentially. The code was initially used by people of some experimental collaborations, in a form that permitted them to handle input in a simple way, on the assumption that they would not need to touch the code. This turned out to be unrealistic. On the other hand, the hope that users could be enabled to modify the code themselves, without interacting at all with the author, proved equally unrealistic. Attempts to organize a ``once and for all'' form of the code have also failed, simply because the field is evolving quickly. For example, it is difficult to find a ``universal'' form for the new distribution functions that one might like to insert in the next five years. So the general idea is that, apart from a central core of classes/functions, there is nothing sacred in the code structure, and that users should be able to modify the code with as little interaction with the author as possible. The first versions, like DY\_AB1, fully exploited the possibility of writing complex class hierarchies offered by c++. In DY\_AB4 this structured form was abandoned, but for efficiency purposes massive use was made of pointers. DY\_AB4 is well tested, and is the most efficient code of this series. A short presentation of this code may be found in \cite{panda_note}.
Its main disadvantage was poor readability, since to increase efficiency it systematically exploited the fortran-style technique of organizing big data structures, with functions working on these data without explicitly getting them as arguments (the ``common'' areas in fortran; in c++ the same is obtained via pointers to data classes). The absence of explicit arguments in the function calls makes the code hierarchy difficult to see at first reading. DY\_AB5 is less efficient (about 30 \% more time-consuming). The advantage is that it is much easier to read and modify (pointers have mostly disappeared). It also offers more pre-cooked options as far as unpolarized and single-polarization Drell-Yan are concerned. \section{General theoretical notes} \subsection{General scheme} The typical cycle of this code consists of the following steps: 1) Random event generation in the Collins-Soper frame for a specified class of projectile and target hadrons (e.g. negative pion and polarized proton). 2) Transformation of these events first to the hadronic center of mass frame (``collider frame''), and next to the fixed target frame. 3) Loop over events taking into account the composition of the (molecular) target in terms of different nuclei. 4) If required, a number of repetitions of a multi-event simulation, possibly with spin reversal. 5) If required, statistical analysis of the azimuthal asymmetries, calculating averages and fluctuations of the results obtained in the simulations of step (4). Here I discuss the general problems and some implementation details connected with steps (1) and (2). The exact and complete cross sections on which the event generation is based may be found in \cite{BR_JPG2}, where two simulation schemes are compared. DY\_AB5 is based on what is named ``scheme II'' in ref.\cite{BR_JPG2}. An alternative code based on ``scheme I'' has been prepared by this author, and the differences in the outcome are shown in \cite{BR_JPG2}.
There it is also discussed why ``scheme I'' is more rigorous and expected to take over in the long run, although it is not yet well suited to the present state of the art on the phenomenological side. The formulas reported in \cite{BR_JPG2} and implemented in DY\_AB5 are rather long. Here, a simplified version of these relations is discussed, to clarify the general form of the cross section, the main approximations contained in it, and the way the code exploits it. In the language of \cite{BR_JPG2}, ``scheme I'' is the parton model cross section for Drell-Yan events with most of the necessary details: several products of distribution functions, associated with unpolarized and polarized partons and hadrons and with all quark and antiquark flavors, are summed. ``Scheme II'' exploits some further approximations: (i) A cumulative event distribution is factorized out of the sum of all terms. This distribution is built from a set of simplified parton distribution functions $f$, and is phenomenological in its transverse momentum dependence. (ii) Generalizing an approximation method that may be found e.g. in \cite{Boer99}, all the least known distribution functions $h$ (those associated with azimuthal asymmetries) are expressed as ratios $h/f$. (iii) Each such $h/f$ term is added to the sum, ``valence-weighted'' by ratios like 4/9 etc. \subsection{The overall cross section} The relations discussed in the following may be found, e.g., in \cite{Conway89} and \cite{Anassontzis}. The problem is that in these two works several such relations take systematically different forms. Since this difference is present in all the relevant literature, we name these two works ``ref.A'' and ``ref.B'' and systematically report the differences. I call ``mass'' $M$ the invariant mass of the lepton pair (it is also named $Q$ in some references). The indexes ``1'' and ``2'' refer to the target and beam hadrons, respectively.
The Drell-Yan differential cross section can be written in an approximate factorized way, inspired by the parton model (see \cite{Field}, chapter 5, and refs.A and B): \begin{equation} {d\sigma\over {d\tau dX_F dP_t d\Omega}} \ =\ {K(\tau) \over s}\cdot \bar S(\tau,X_F)\cdot S'(P_t)\cdot A(\theta, \phi, \phi_s), \end{equation} or equivalently in the form \begin{equation} {d\sigma\over {dX_1 dX_2 dP_t d\Omega}} \ =\ {K(\tau) \over s}\cdot S(X_1,X_2)\cdot S'(P_t)\cdot A(\theta, \phi, \phi_s). \end{equation} As customary, $s$ is the squared invariant mass of the two colliding hadrons; for a fixed-target antiproton beam, \begin{equation} s\ \equiv\ (E_{CM})^2\ =\ 2 m_p (m_p + E_{\bar p,LAB}). \end{equation} The scaling assumption means that the only dependence of eqs.~(1) and (2) on $s$ should be contained in the $1/s$ term. $X_F$ and $\tau$ are dimensionless invariant variables associated with the beam-axis momentum projection and with the virtuality of the virtual photon produced in $\bar p$ $+$ $p$ $\rightarrow$ $X$ $+$ $\gamma^*$ $\rightarrow$ $X$ $+$ $\mu^+\mu^-$. The pair $\tau$, $X_F$ can be substituted by the equivalent pair $X_1$, $X_2$; then $\bar S$ and $S$ are related by a Jacobian factor. The definitions of $\tau$, $X_F$, $X_1$ and $X_2$ are given below, where they require some discussion. $\vec P_t$ is the transverse $2-$dimensional component of the 4-momentum of the virtual photon with respect to the beam axis (it is also named $q_t$ or $Q_t$). The angular variables $\theta$, $\phi$, $\phi_s$ appearing in $A(\theta, \phi, \phi_s)$ are measured in a reference frame where the virtual photon is at rest. In this frame $\theta$ and $\phi$ are the polar and azimuthal angles of the momentum of one of the two muons, while $\phi_s$ is the azimuthal angle of the target spin. These variables are discussed later, in the paragraph on the angular distributions.
In the right hand sides of equations (1) and (2), $\bar S(...)$ and $S(...)$ differ because of the direct substitution $\tau$ $= $ $\tau(X_1,X_2)$, $X_F$ $=$ $X_F(X_1,X_2)$, and because $\bar S(...)$ $=$ $J S(...)$, where $J$ is the Jacobian of the coordinate transformation between $d\tau dX_F$ and $dX_1 dX_2$. The coefficient $K$ $=$ $K(\tau)$ is normally named the ``K-factor'' and is predicted to be 1 in the parton model. Actually it is neither 1 nor constant (it is $\approx$ 2). Traditionally $K$ contains all the parton model violations, which are thus kept apart from the rest of the cross section. Summarizing the PQCD corrections into a single $\tau-$dependent factor is to a certain extent justified for $\tau$ far from its kinematic boundaries 0 and 1 (see \cite{Field}, subsections 5.2, 5.3, 5.5, and \cite{SSVY}), and for moderate transverse momenta $\ll M$. Under these conditions the parton-parton $\rightarrow$ $\gamma^*+X$ cross section is dominated by those terms where $X$ subtracts no invariant mass from the parton-parton $\rightarrow$ $\gamma^*$ transition predicted in the plain parton model. The above cross section factorization is formally exact within the parton model, but in the form reported above it hides the fact that $S'(P_t)$ depends (weakly) on $\tau$ and $X_F$, and that $A(\theta, \phi, \phi_s)$ depends on $X_1$, $X_2$, $P_t$, $M$. In the code these dependencies have been taken into account. The previous equations are however written in such a way as to focus on the $assumed$ variable hierarchy that allows for a kinematic separation of the $S$, $S'$ and $A$ contributions (see e.g.
refs.A and B for examples of the experimental procedure used to extract these terms from an incomplete phase space): (i) $S(X_F,\tau)$ does not depend on $P_t$ or on the angular variables, so for any assigned $X_F,\tau$ (or, equivalently, $X_1, X_2$) it can be calculated from the cross section integrated over all the $P_t, \theta, \phi$ phase space and summed over spin; (ii) $S'(P_t)$ $\equiv$ $S'(X_F,\tau,P_t)$ does not depend on the angular variables, so it can be determined by $\theta, \phi$ integration plus a sum over spin. The functions $S'(P_t)$ and $A(\theta,\phi,\phi_s)$ are defined so that \begin{equation} \int S'(P_t)\ =\ 1,\ \ \int A(\theta,\phi,\phi_s) d\Omega_{\theta,\phi}\ =\ 1, \end{equation} integrating over all the phase space, for any given $\phi_s$. So the total $\sigma$ is just the integral of $K(\tau) S(X_1,X_2)$. \subsection{The longitudinal term $S(X_1,X_2)$: definitions and dangerous ambiguities.} We can describe the meaning of $\tau$ and $X_F$ by the following relations: \begin{equation} \tau\ \equiv\ {M^2 \over s} \approx\ {M^2 \over M^2_{max}} \end{equation} \begin{equation} X_F\ \approx\ \Bigg({P^\gamma_z \over {{P_\gamma}_{max}}} \Bigg)_{CM} \end{equation} $\tau$ is the ratio of the virtual photon virtuality $M^2$ to its kinematic maximum $s$, which is reached in an exclusive $\bar{p}p\rightarrow l^+l^-$ annihilation into a dilepton. $X_F$ is approximately the ratio of the beam-axis component of the virtual photon momentum (in the hadron collision CM) to its kinematic maximum. The precise definition of $X_F$ is not unique in the literature, as discussed below. Whatever the exact definition, $X_F$ and $\tau$ are normally defined as $measurable$ scalar functions of the projectile and target four-momenta.
Alternatively, they can be substituted by their combinations $X_1$, $X_2$, whose approximate meaning is the ratio of the longitudinal momentum of each colliding quark to the momentum of its parent hadron in the reaction CM: \begin{equation} X_i\ \approx\ \Bigg({{(P_z)_{quark}} \over {(P_z)_{hadron}}}\Bigg)_{CM}. \end{equation} For $X_1$ and $X_2$ several definitions can be found, all approximately equivalent at large $s$ and $M$. These definitions fall into two groups: (a) ``theoretical'' definitions, given in terms of the (unmeasured) light cone momenta of the colliding quarks; (b) ``experimental'' definitions (as in refs.A and B), which express $X_1$ and $X_2$ as combinations of the (measured) variables $\tau$ and $X_F$. In the ``theoretical'' case, $X_i$ is the ratio of the large light-cone component of the $i-$quark momentum to the corresponding component of the momentum of its parent hadron. For a rigorous theoretical definition of $X_1$ and $X_2$ see e.g. \cite{BDR}. In the high energy limit the experimental definitions are supposed to reproduce the corresponding theoretical ones, thus giving approximate access to the quark momenta. It must be remarked that this is $not$ the situation in some portions of the kinematic range considered here. The definition of $\tau$ is the same in refs.A and B, and seemingly ``$\tau$'' means the same thing in all the literature on the subject: \begin{equation} \tau\ =\ M^2/s\ \ \ \ (refs.\ A\ and\ B) \end{equation} On the contrary, for $X_F$, $X_1$, and $X_2$ the definitions are not unique. Ref.A uses \begin{equation} X_F\ =\ {{2 P_L} \over \sqrt{s}}\ \ \ \ (ref.\ A) \end{equation} \begin{equation} X_F\ =\ X_1-X_2,\ \ \ \tau\ =\ X_1 X_2 \ \ \ (ref.\ A). \end{equation} Ref.B uses \begin{equation} X_F\ =\ {{2P_L} \over {\sqrt{s}(1-\tau)}}\ \ \ \ (ref.\ B) \end{equation} \begin{equation} X_F\ =\ (X_1-X_2)/(1-\tau), \ \ \tau\ =\ X_1 X_2 \ \ (ref.\ B).
\end{equation} The definitions of ref.A are easier to use and more common in the literature, so the code sticks to them. With them one must take care of the kinematic limits: $\vert X_F \vert_{max} $ $<$ 1. When comparing differential cross sections referred to the variables $X_1, X_2$ or $X_F, \tau$, Jacobian conversion factors are necessary. The differential cross sections of equations (1) and (2) enjoy complete scaling properties in the $\bar{p}p$ case, while in the $\pi p$ case a mass-dependent term\cite{BergerBrodsky79} is introduced by ref.A; it is important at large $X_\pi$ and is taken into account in the code. Full scaling means that the $(P_t,\theta,\phi)-$integrated cross sections depend only on the $X-$set variables, apart from an $s$ dependence confined to the $1/s$ term. A comparison of the data of ref.A (250 GeV/c beam energy) and ref.B (125 GeV/c) on $\pi p$ DY scattering shows approximate scaling. $K$ is assumed to be a function of $\tau$ in ref.A (and in the code), while most experiments (see the DY database from \cite{HEPDATA}) treat it as a constant, normally $\approx$ 2. For large $\tau$ values, the data of ref.A show that this dependence is relevant, and the results of the calculation by \cite{SSVY} support this point. Most of the events concentrate at the lower end of the involved $\tau-$range (wherever it begins), and this may make it difficult to scan a large $\tau-$range so as to establish a precise dependence of $K$ on $\tau$. As remarked in ref.B, the choice of the $K$ value depends on the choice of the normalization of the quark distribution functions, which is not unique. Ref.B reports a detailed and systematic discussion of the different normalization methods and of the consequent changes in the values of the distribution function parameters and of $K$. This can be exploited, with the caveat that the distribution functions must be used according to the notations of ref.B (see below).
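As a minimal illustration of the ref.A conventions adopted by the code (the function name is mine, not part of DY\_AB5), the pair $(X_1,X_2)$ can be recovered from the measured $(X_F,\tau)$ by inverting $X_F=X_1-X_2$, $\tau=X_1 X_2$:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Hypothetical helper (not part of DY_AB5 itself): invert the ref.A
// relations  X_F = X1 - X2,  tau = X1 * X2  to recover (X1, X2)
// from the measured pair (X_F, tau).
std::pair<double, double> x1x2_from_xf_tau(double xf, double tau) {
    const double x1 = 0.5 * (xf + std::sqrt(xf * xf + 4.0 * tau));
    const double x2 = x1 - xf;   // equivalently tau / x1
    return {x1, x2};
}
```

The positive root is the physical one, since $X_1>0$; the kinematic limit $\vert X_F\vert$ $<$ 1 keeps both values inside $[0,1]$.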
In the default version of the code DY\_AB5, $S(\tau,X_F)$ has been reconstructed using the parameterized form given in appendix A of ref.A, together with the kinematic definitions and structure functions contained in the main text of ref.A. This allowed me to fit the $\pi^--$Tungsten DY differential cross sections reported by that experiment at 252 GeV/c. When the notations of the two works are made consistent with each other, the distribution functions fitted from ref.A allow one to reproduce reasonably well the $\pi^-$ data reported by ref.B at 125 GeV/c. To reproduce the $\bar p-A$ DY data of the same experiment, proton quark distribution functions must be used as antiproton antiquark distribution functions; the reproduction is then reasonable. In both references, and everywhere in the literature, $S(X_1,X_2)$ has the form \begin{equation} S(X_1,X_2)\ =\ G(X_1,X_2) \Sigma_i \bar F(X_1) F(X_2) \end{equation} where $G(X_1,X_2)$ is a kinematic factor proportional to $(1/X_1X_2)^2$ in ref.A and to $1/X_1X_2$ in ref.B. In general its exact form depends on notations and changes from paper to paper. The exponent ``1'' or ``2'' in the $1/X_1X_2$ factor is very important because it indicates whether the distribution functions $F(X)$ must be read as $F(X)$ or as $X F(X)$ (see below). $\bar F$ and $F$ are linear combinations of the main $\bar q/q$ distribution functions $u(X)$, $d(X)$, $s(X)$. For these we have \begin{equation} X\cdot u(X)_A\ = u(X)_{B}, \ \ X\cdot d(X)_A\ = d(X)_{B},\ etc. \end{equation} So, e.g., the normalization $\int X dX (u+d)$ $=$ 0.34 in ref.A becomes $\int dX (u+d)$ $=$ 0.34 in ref.B. The $\alpha$ parameter appearing in the typical parameterization $u(X)$ $=$ $X^\alpha (1-X)^\beta$ changes by one unit when passing from ref.A to ref.B, and so on. The important remark is that this ambiguity $q(X)$ $\leftrightarrow$ $Xq(X)$ is present throughout the literature on the subject, not only in these works.
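The $q(X) \leftrightarrow Xq(X)$ ambiguity can be made concrete with a small sketch (names and numbers are mine, not taken from the code): converting a ref.A-style parameterization to ref.B notation shifts the $\alpha$ exponent by one unit, leaving $N$ and $\beta$ untouched.

```cpp
#include <cassert>
#include <cmath>

// Toy distribution shape  q(X) = N * X^alpha * (1-X)^beta  in ref.A notation.
struct QShape {
    double N, alpha, beta;
    double operator()(double x) const {
        return N * std::pow(x, alpha) * std::pow(1.0 - x, beta);
    }
};

// Passing from ref.A to ref.B notation means q_B(X) = X * q_A(X),
// i.e. the alpha exponent grows by one unit while N and beta are unchanged.
QShape to_refB_convention(const QShape& qa) {
    return {qa.N, qa.alpha + 1.0, qa.beta};
}
```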
This is a very delicate point and must be taken into account whenever new terms are added to the code. \subsection{The $P_t$ dependent $S'(P_t)$.} The traditional parton model literature is built on the collinear approximation, so for the $P_t-$dependent parts one must rely on phenomenological fits. Experiments A and B did not impose a low-$P_t$ cutoff, with the consequence that their small-$P_t$ data show a completely different qualitative behavior. Measured values of the function $S'(P_t)$ can be seen e.g. in ref.A figs.23 and 25 ($\pi+p$ case), or in ref.B fig.9 ($\pi+p$ and $\bar p+p$ cases). Since azimuthal asymmetries are very small for $P_t$ $<$ 1 GeV/c, this difference is not relevant for the purpose of planning experiments on azimuthal asymmetries in Drell-Yan. For pions the default option is the distribution used in ref.A. The code however offers a series of alternatives. The class that handles all $P_t$-distributions is PT2, and the use of a specific distribution requires subclassing it. Some options are already present in the code DY\_AB5. PT2\_Old : public PT2 is further subclassed into three possibilities: (1) the distribution by Conway et al\cite{Conway89}, reproducing $\pi^--$tungsten at 250 GeV/c; (2) the one by J.Webb\cite{Webb} (E866 collaboration, hep-ex/0301031) for proton-nucleus at 800 GeV/c; (3) the one by Chang\cite{Chang} (E866 collaboration) for the same measurement as J.Webb, at the $J/\psi$ mass. In addition, the class PT2\_Simple\_Asym : public PT2 implements the form $NP_t^n/(P_t^2+P_o^2)^m$; this shape is useful for the azimuthal asymmetry terms (see below). The most relevant features of the measured $S'(P_t)$ are: (i) a not too strong dependence on $X_1,X_2,s$; (ii) for $P_t$ $<$ 2 GeV/c the distribution is not steeply decreasing (in the case of ref.A it even increases up to 0.5 GeV/c), while for $P_t$ $>$ 2 GeV/c it decreases steeply (with a power law, not exponentially); (iii) the average $P_t$ is near 1 GeV/c and, as is well known (see e.g.
\cite{Field}) it is larger than in lepton-induced DIS and in hadron-hadron semi-inclusive meson production. In the preliminary simulation of a Drell-Yan experiment on azimuthal asymmetries, a good phenomenological shape of $S'(P_t)$ is a key success factor, because measured and/or predicted leading-twist azimuthal asymmetries increase with increasing $P_t$, obliging the experiment to select events at as large a $P_t$ as possible. However, due to the very fast decrease of $S'(P_t)$ for $P_t$ $>$ 2 GeV/c, choosing too large a $P_t$ cut-off can make a measurement prohibitive because of the fast falloff of the event rates at large $P_t$. Ref.A reports an explicit parameterization for $S'(P_t)$ (relative to $\pi-$induced Drell-Yan). Ref.B reports data and some models for the $P_t$ distributions in both $\bar p p$ and $\pi p$ DY. In the region $P_t$ $>$ 3 GeV/c the error bars are too big to draw any conclusion, but for $P_t$ up to 3 GeV/c the $\pi p$ and $\bar p p$ $P_t-$distributions are very similar. For pion and antiproton projectiles I did not modify the parameterizations inherited from appendix A (pion) of ref.A and from ref.B ($\bar{p}$). The pion one is $not$ scale-independent: it depends explicitly on $M$ $=$ $\sqrt{s\tau}$, and produces a slow increase of the average $<P_t^2>$ with increasing $s$ at constant $\tau$. For proton projectiles, the parameterization by J.Webb\cite{Webb} is probably preferable. \subsection{Isospin/flavor composition.} Until a few years ago, models of the functions associated with azimuthal asymmetries did not go into such details as the $Z/A$ composition of the target. For this reason, the first code DY\_AB1 did not deal with isospin/flavor matters. After the Hermes and Compass results, some flavor- and isospin-dependent parameterizations for single-spin asymmetries have appeared (see e.g.\cite{BRMCc,Torino05,VogelsangYuan05,CollinsGoeke05}), obliging the codes from DY\_AB2 onwards to take these problems into account.
On the experimental side the $Z/A$ composition is important for another reason: it determines the effective dilution factor of the target polarization. DY\_AB5 handles events coming from separate pieces of a molecular target, taking the individual dilution factor of each nucleon into account. So, to reproduce an $NH_3$ target with 85\% polarized $H$, one may arrange the code parameters so as to require about 4 events on an unpolarized proton or neutron and one event on a polarized proton. After this, any specific sorted event, e.g. $\pi^--$neutron, will need to be translated into a $\bar{u}u$, a $\bar{d}d$ or a $\bar{s}s$ event. In DY\_AB5 this is taken into account by $X-$averaged weighting factors. The criterion and the weights for the relevant cases are discussed in detail in ref.\cite{BR_JPG2}. There, the errors associated with this technique are also shown, in ``approximate vs exact'' scatter plots. The point is that a sum of the kind \begin{equation} \sum_{flavor} \Big( S_{leading}[1 + S_{asymmetry}/S_{leading}] \Big) \end{equation} where each term is flavor-specific, is approximated in the form \noindent \begin{equation} S_{leading} \Big(1 + \sum_{flavor} W_{flavor} [S_{asymmetry}/S_{leading}] \Big), \end{equation} where the leading term is common, the weights are constant (and of course depend on the projectile-target hadron pair of the sorted event), and the terms $[...]$ are flavor-specific functions in which it is not possible to separate the numerator from the denominator. Weight factors are needed to compensate for the fact that, e.g., the asymmetries associated with $\bar{d}d$ collisions have little relevance in a process like $\bar{p}p$ or $\pi^- p$, while they are more relevant in $\pi^+ n$. As observed in \cite{BR_JPG2}, the discrepancies between the ``exact'' scheme and the constant-weight scheme are relevant when both colliding hadrons have small $X$.
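A toy numerical sketch of the two schemes above (all numbers invented, not taken from \cite{BR_JPG2}): with weights matched to the flavor decomposition at one kinematic point the two sums coincide, while reusing the same constant weights at a point with a different flavor decomposition produces the kind of discrepancy discussed above.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy flavor term: leading contribution and asymmetry-to-leading ratio.
struct Flavor {
    double s_lead;   // flavor-specific S_leading
    double ratio;    // flavor-specific S_asymmetry / S_leading
};

// "Exact" scheme: each flavor carries its own asymmetry.
double exact_sum(const std::vector<Flavor>& f) {
    double s = 0.0;
    for (const Flavor& fl : f) s += fl.s_lead * (1.0 + fl.ratio);
    return s;
}

// Constant-weight scheme: common leading term, fixed weights w[i].
double weighted_sum(const std::vector<Flavor>& f, const std::vector<double>& w) {
    double s_lead = 0.0, asym = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        s_lead += f[i].s_lead;
        asym += w[i] * f[i].ratio;
    }
    return s_lead * (1.0 + asym);
}
```

With $W_{flavor}$ equal to the leading-term fraction of each flavor at the given point the two expressions are algebraically identical; the error appears only when the same constant weights are reused across the phase space, which is exactly the small-$X$ issue noted above.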
In practice, planned experiments will try to stay as far as possible from this region, since the common belief is that transverse spin effects are small there. In addition, although it would be better to sort events according to the former scheme, phenomenological parameterizations normally extract the ratios $[S_{asymmetry}/S_{leading}]$ according to the latter scheme. Paradoxically, as remarked in \cite{BR_JPG2}, this makes the latter scheme more proper than the former for a simulation. So, although the author of this note prepared long ago a ``DY\_AB6'' code working according to a more exact scheme (the one used for the simulations of \cite{BR_JPG2}), such a code is unlikely to be useful for some time yet. \subsection{Angular distributions} The angles $\theta$, $\phi$ and $\phi_s$ in the function $A(\theta, \phi, \phi_s)$ are measured in a reference frame with origin in the center of mass of the muon pair. In other words, the origin of this frame coincides with the virtual photon position. The axes of this frame can be oriented in several ways. One choice leads to the so-called Collins-Soper frame\cite{CollinsSoper77} (CS), with the $\hat z$ axis parallel to the difference of the momenta of the projectile and of the target $nucleon$. The transverse axes are oriented so that the $xz$ plane contains the virtual photon momentum. Other common alternatives put the $\hat z$ axis along the beam or target direction in the dilepton CM. The DY\_AB5 code sorts events in the CS frame. This fact becomes relevant when events are transformed to the fixed-target or to the collider frame (see the related section below). In the CS frame, $\theta$ and $\phi$ are the angles formed by $one$ of the muons ($\mu^+$ from now onwards), and $\phi_s$ is the target spin orientation.
For a $qualitative$ understanding of the kinematics of most (not all) events, one may imagine a virtual photon with transverse momentum not much larger than 1 GeV, and $\vert X_F\vert$ not too close to zero, so that the longitudinal momentum of the photon is larger than its transverse momentum. Then the CS $z$ axis is roughly parallel to the collider one. The CS $xy$ plane is also roughly parallel to the collider $xy$ plane, but the CS $x$ and $y$ axes are randomly rotated by an angle $\phi_{coll}$ with respect to their collider configuration. This $\phi_{coll}$ is the angle between the $xy-$component of the virtual photon momentum and the $x$ axis in the lab. The $x$ axis of the CS frame coincides physically with the transverse component of the virtual photon 3-momentum. Then the transverse proton spin, which is fixed along the $x$ axis in the collider frame, lies on the CS $xy$ plane at an angle $\phi_s$ $=$ $-\phi_{coll}$. These approximations are not used in the code, but they can be useful to understand the meaning of the employed angles, and to get an idea of the distribution of useful events in a collider frame. In the Collins-Soper frame the angles $\theta$ and $\phi$ are the polar and azimuthal angles of the momentum of one of the two leptons and are randomly distributed. The spin angle $\phi_s$ is also randomly distributed in the CS frame. In the absence of information on two of these angles, the third one is uniformly distributed over its phase space. However, taken together, their distributions are correlated by $A(\theta, \phi, \phi_s)$ in the cross section. In the simulation, events are initially sorted flat with respect to all the kinematic variables $X_1$, $X_2$, $P_t$, $\theta$, $\phi$, $\phi_s$.
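A minimal toy sketch of this flat-sorting / accept-reject step (not the DY\_AB5 implementation), keeping only the $1+\cos^2\theta$ part of the angular weight:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Toy sketch of flat sorting plus accept-reject, using only the
// unpolarized 1 + cos^2(theta) piece as the acceptance weight.
// The weight is bounded by wmax = 2.
double sample_costheta(std::mt19937& rng) {
    std::uniform_real_distribution<double> flat(0.0, 1.0);
    for (;;) {
        const double c = 2.0 * flat(rng) - 1.0;   // flat-sorted cos(theta)
        const double w = 1.0 + c * c;             // toy angular weight
        if (2.0 * flat(rng) < w) return c;        // accept with probability w/2
    }
}

// Sample moment <cos^2 theta>; for the density proportional to
// 1 + c^2 on [-1,1] the exact value is 0.4.
double mean_cos2(int n, unsigned seed) {
    std::mt19937 rng(seed);
    double s = 0.0;
    for (int i = 0; i < n; ++i) {
        const double c = sample_costheta(rng);
        s += c * c;
    }
    return s / n;
}
```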
The sorted events are accepted/rejected according to the cross section expressed by eq.(1) where, at the level of single-spin experiments, \begin{equation} A(\theta, \phi, \phi_s) \ =\ 1\ +\ \cos^2(\theta)\ +\ {{\nu(X_1,X_2,P_t)} \over 2} \sin^2(\theta)\cos(2\phi) \ +\ \nonumber \end{equation} \begin{equation} +\ \vert \vec S_2 \vert B( X_1, X_2, P_t, M,\theta, \phi, \phi_s ) \end{equation} The unpolarized asymmetry measured in ref.A is contained in the $\nu...$ term. Single-spin asymmetries arise from the $\vert \vec S_2 \vert B(...)$ term. Two origins for such terms have been considered here. The Sivers\cite{Sivers} mechanism produces a term of the form \begin{equation} B = 2 {m_p \over M} \sin(2\theta) \sin(\phi-\phi_s) H_a(X_1,X_2) \end{equation} while the Boer-Mulders\cite{Boer99,BoerMulders98} asymmetry is of the form \begin{equation} B = - {1 \over 2} \sqrt{\nu \over \nu_{max}} \sin^2(\theta) \sin(\phi+\phi_s) H_b(X_1,X_2) \end{equation} Any of the above azimuthal asymmetries (unpolarized, Sivers, Boer-Mulders) can be disentangled from the other two by a suitably weighted $\phi$ integration. The ``statistical analysis'' option of the code offers this possibility. \subsection{Parameterizations for the nonstandard distribution functions} As written above, the leading distribution functions are bypassed by assuming a phenomenological form (correctly behaving and scaling over a wide kinematic range) for the ``standard'' event distribution, i.e. for the $(\theta,\phi)-$averaged part of the event distribution. For the ``nonstandard'' terms (i.e. those that produce nontrivial angular distributions in the CS frame) the code includes, as a first possibility, some phenomenological parameterizations that have recently become available in the literature and, as a second option, the possibility of freely choosing the parameters of simple pre-determined shapes.
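The disentangling by weighted $\phi$ integration mentioned in the previous subsection can be checked numerically: projecting a toy version of the angular distribution of eq.(17) on $\cos(2\phi)$ isolates the $\nu$ term, while the $\sin(\phi-\phi_s)$ Sivers and $\sin(\phi+\phi_s)$ Boer-Mulders pieces integrate to zero. All parameter values below are invented.

```cpp
#include <cassert>
#include <cmath>

// Toy angular distribution: unpolarized nu term plus Sivers-like and
// Boer-Mulders-like single-spin terms (coefficients siv, bm invented).
double toy_A(double nu, double siv, double bm,
             double th, double phi, double phis) {
    const double s2 = std::sin(th) * std::sin(th);
    return 1.0 + std::cos(th) * std::cos(th)
         + 0.5 * nu * s2 * std::cos(2.0 * phi)
         + siv * std::sin(2.0 * th) * std::sin(phi - phis)
         + bm  * s2 * std::sin(phi + phis);
}

// Recover nu via  nu = 2/(pi sin^2 th) * Int_0^{2pi} A cos(2 phi) dphi,
// integrated with the midpoint rule (exact here for trig polynomials).
double extract_nu(double nu, double siv, double bm, double th, double phis) {
    const int n = 2000;
    const double pi = 3.14159265358979323846;
    double integral = 0.0;
    for (int i = 0; i < n; ++i) {
        const double phi = (i + 0.5) * 2.0 * pi / n;
        integral += toy_A(nu, siv, bm, th, phi, phis) * std::cos(2.0 * phi);
    }
    integral *= 2.0 * pi / n;
    const double s2 = std::sin(th) * std::sin(th);
    return 2.0 * integral / (pi * s2);
}
```

The orthogonality of the three angular modulations is what makes the ``statistical analysis'' option of the code able to separate them.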
The functions associated with the nonstandard terms have been put in places where they are easy to find (in the file c\_dy\_master.cpp). So, if one wants to radically change the shape of these functions, it is possible to do so. The code has been written assuming that any potential user could be interested in adding supplementary terms to this set. In other words, I assume that in the next ten years there will be no special reason to change the ``standard'' part of the cross section, while updates and news on the asymmetry side will be frequent. Although it is possible to add new parameterizations, respecting some formats makes things easier. Here I would like to discuss these formats. 1) Let us write the $A$ term in eq.(17) in the most general form \begin{equation} A(......)\ \equiv\ 1\ +\ \cos^2(\theta)\ +\ \sum \Big( F(X_1,X_2,P_t) F'(\theta)F''(\phi,\phi_s) \Big) \end{equation} It must be recalled that, according to the scheme adopted in this code, each $F$ term is the $ratio$ between a term associated with a given kind of angular asymmetry and the ``standard'' term, which carries with it the $1+\cos^2(\theta)$ angular dependence. Since most of the available parameterizations give such a ratio directly, this is the most convenient form, as discussed earlier in this work. 2) The code assumes factorization between longitudinal and transverse degrees of freedom, and between terms coming from the projectile and the target, in the function terms $\nu$ and $B$ of eq.(17). These functions have the form \begin{equation} F(X_1,X_2,P_t)\ \equiv\ f_1(X_1)f_2(X_2)f_t(P_t), \end{equation} 3) For the longitudinal components $f_i(X_i)$ the code assumes the form \begin{equation} f_i(X)\ \equiv\ N X^\alpha(1-X)^\beta \end{equation} Inserting completely different functions is possible, but it is of course much easier to change three parameters for each quark flavor.
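A sketch of the factorized format of points 2) and 3) (the C++ names are mine, not those of c\_dy\_master.cpp):

```cpp
#include <cassert>
#include <cmath>

// Longitudinal shape  f_i(X) = N * X^alpha * (1-X)^beta  of point 3).
struct XShape {
    double N, alpha, beta;
    double operator()(double x) const {
        return N * std::pow(x, alpha) * std::pow(1.0 - x, beta);
    }
};

// Trivial transverse factor used only for the illustration below.
inline double unit_ft(double) { return 1.0; }

// Factorized asymmetry term  F(X1,X2,Pt) = f1(X1) f2(X2) f_t(Pt)  of point 2).
struct AsymmetryFactor {
    XShape f1, f2;
    double (*ft)(double);
    double operator()(double x1, double x2, double pt) const {
        return f1(x1) * f2(x2) * ft(pt);
    }
};
```

Changing the three $(N,\alpha,\beta)$ parameters per flavor, as the text suggests, amounts to swapping the `XShape` values; replacing `ft` covers the transverse side.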
4) The transverse term is \begin{equation} f_t(P_t)\ =\ { { \int d^2 k_1 d^2 k_2 [g_1(\vec k_1)g_2(\vec k_2)]_{asymmetry}\delta^2(\vec P_t - \vec k_1 - \vec k_2) } \over { \int d^2 k_1 d^2 k_2 [g_1(\vec k_1)g_2(\vec k_2)]_{standard}\delta^2(\vec P_t - \vec k_1 - \vec k_2) } } \end{equation} \noindent where, as previously recalled, the numerator is the genuinely asymmetric term and the denominator is the leading standard $P_t-$dependence. 5) For the transverse part it is the responsibility of the user to directly introduce the convolution of the two transverse momentum distributions corresponding to the colliding partons. In other words, the code assumes that one directly introduces $f_t(P_t)$ in eq.(20). This last constraint is not a true constraint, since all the parameterizations known to me employ Gaussian shapes, whose convolution may easily be computed analytically. On the other hand, this saves execution time for the code. 6) Frequently, the result of eq.(23) for $f_t(P_t)$ will assume the form that I name ``$f_{simple}(P_t)$'': \begin{equation} f_{simple}(P_t)\ \equiv\ NP_t^n/(P_t^2+P_o^2)^m. \end{equation} \noindent Because of the widespread use of Gaussian distributions, this form for the ratio $f_t(P_t)$ is quite common, so the code offers it among other options. To implement the calculated $P_t-$convolutions, or to insert phenomenological ones (including the ``standard'' term $S'(P_t)$ of eq.(1)), the code offers the class PT2, which may be subclassed in two ways: (i) exploiting the class PT2\_Simple\_Asym : public PT2 to directly insert the parameters $N$, $P_o$, $n$, $m$ into the form $NP_t^n/(P_t^2+P_o^2)^m$; (ii) subclassing PT2 with another user-defined distribution. As discussed above, the code itself offers several examples of this procedure, i.e. all the relevant alternatives offered for the $P_t$-dependence of the leading ``standard'' term.
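The subclassing mechanism of points (i) and (ii) can be sketched as follows (PT2 here is a minimal stand-in for the class in DY\_AB5, not its actual interface):

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the PT2 base class handling Pt-distributions.
class PT2 {
public:
    virtual ~PT2() = default;
    virtual double value(double pt) const = 0;   // S'(Pt) or f_t(Pt)
};

// The "simple" shape  N * Pt^n / (Pt^2 + Po^2)^m  used for the
// azimuthal asymmetry terms.
class PT2_Simple_Asym : public PT2 {
public:
    PT2_Simple_Asym(double N, double Po, double n, double m)
        : N_(N), Po_(Po), n_(n), m_(m) {}
    double value(double pt) const override {
        return N_ * std::pow(pt, n_) / std::pow(pt * pt + Po_ * Po_, m_);
    }
private:
    double N_, Po_, n_, m_;
};
```

A user-defined distribution, as in option (ii), would simply be another class deriving from `PT2` and overriding `value`.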
The $default$ parameterizations already present correspond, for the Boer-Mulders effect, to the $x-$independent $\nu(k_t)$ function given in \cite{Boer99}, and for the Sivers effect to the two alternatives of \cite{BRMCc,Torino05}. For the $k_t-$unintegrated transversity distribution the default $(N,\alpha,\beta)$ parameters are $(1,0,0)$, and no predefined set was considered. \section{Interesting Plots} In the following, some collections of simulated Drell-Yan events are compared with the measured distributions, for negative pions on Tungsten. At the kinematics of interest for DY\_AB5 the two experiments with by far the best statistics in the mass range 4-9 GeV/c$^2$ are E615 at Fermilab and NA10 at CERN. The code DY\_AB has been based on the scheme and parameters given by E615 in \cite{Conway89}; however, it works equally well with the NA10 data\cite{NA10_194,NA10_286}, which come from nearby kinematic regions. \subsection{Comparison with experimental data: E615} To produce fig.1, DY\_AB5 has sorted 100,000 Drell-Yan events for negative pions with beam energy 252 GeV on a Tungsten target, in the dilepton mass range 4-7 GeV/c$^2$. From this set, I extract the subset of events with $\sqrt{\tau}$ in the range 0.254-0.277 (mass from about 5.5 to 6 GeV/c$^2$). The distribution of these events with respect to $x_F$ may be compared with the cross sections given in \cite{Conway89}, table VI. \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_conway1.eps} \caption{ Filled squares: E615 data for $\sqrt{\tau}$ in the range 0.254-0.277. Empty squares with joining line: Simulation (see text). \label{fig:X1}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_conway3.eps} \caption{ Filled squares: E615 data (252 GeV pion beam) for $\sqrt{\tau}$ in the range 0.392-0.415. Empty squares with joining line: Simulation (see text).
\label{fig:X2}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_conway_qt1.eps} \caption{ Filled squares: E615 data for $x_F$ in the range 0-0.1. Empty squares with joining line: Simulation (see text). \label{fig:X3}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_conway_qt2.eps} \caption{ Filled squares: E615 data for $x_F$ in the range 0.6-0.7. Empty squares with joining line: Simulation (see text). \label{fig:X4}} \end{figure} In fig.1 the empty squares joined by a line are the event numbers sorted by DY\_AB5. Horizontal bars represent the size $\Delta x_F$ $=$ 0.1 of each bin, vertical bars the statistical error $\sqrt{N}$. The full black squares with no horizontal error bar are the numbers from table VI of \cite{Conway89}, rescaled by a common constant factor so as to transform cross-section values into expected bin populations. Sets of data corresponding to smaller $\sqrt{\tau}$ cover a slightly smaller range in $x_F$, while with increasing $\sqrt{\tau}$ the event numbers filling each experimental bin decrease and the error bars increase. For comparison, in fig.2 we report data from table VI of \cite{Conway89} in a mass range near the upper edge of 9 GeV/c$^2$: $\sqrt{\tau}$ ranges from 0.392 to 0.415 (mass between about 8.5 and 9 GeV/c$^2$). Clearly, the error bars are much bigger than in the case of fig.1. The corresponding simulated event numbers are extracted from a set of 100,000 sorted events between mass values 7 and 9.2 GeV/c$^2$. Data from \cite{Conway89} do not cover $x_F$ $<$ $-0.1$ or $-0.2$. This is a typical situation in fixed-target experiments, where $x_{projectile}$ $\sim$ 1 and $x_{target}$ $\sim$ 0. The data and simulations in figs.1 and 2 are integrated with respect to the transverse momentum of the dilepton pair. In figs. 3 and 4 I report distributions with respect to $P_t$, for two assigned ranges of $x_F$: 0-0.1 (fig.3) and 0.6-0.7 (fig.4).
Here data and simulated distributions are integrated with respect to the mass (equivalently, to $\tau$) over the mass range 4-9 GeV/c$^2$. The meaning of open and filled squares is the same as in figs.1 and 2. Data points come from the same experiment as \cite{Conway89} (they are reported explicitly in \cite{HEPDATA}; in \cite{Conway89} figures of the $P_t-$distributions are present, but a table of values is not reported). \subsection{Comparison with experimental data: NA10} The two richest collections of events at the kinematics of interest here have been provided by the NA10 collaboration, with negative pions of 194 and 286 GeV on Tungsten\cite{NA10_194,NA10_286}. For the beam at 194 GeV, the data may be found in the final table of \cite{NA10_194}, while the data relative to the higher-energy beam have been taken from \cite{HEPDATA}, to which they were sent as a private communication. This experiment did not publish transverse momentum distributions. In addition, the covered $x_F$ range tends to become rather narrow near the lower dilepton mass value of 4 GeV/c$^2$, where most events concentrate. So the most interesting distributions are at larger mass values than for E615. \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_na10_low1.eps} \caption{ Filled squares: NA10 data at 194 GeV, for $\sqrt{\tau}$ in the range 0.33-0.36. Empty squares with joining line: Simulation (see text). \label{fig:X5}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_na10_up1.eps} \caption{ Filled squares: NA10 data at 286 GeV, for $\sqrt{\tau}$ in the range 0.33-0.36. Empty squares with joining line: Simulation (see text). \label{fig:X6}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_na10_up2abc.eps} \caption{ Filled squares: NA10 data at 286 GeV, for $\sqrt{\tau}$ in the ranges 0.21-0.24 (top), 0.24-0.27 (middle), 0.27-0.3 (bottom). Empty squares with joining line: Simulation (see text).
\label{fig:X7}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{fig_na10_top12.eps} \caption{ Filled squares: NA10 data at 286 GeV, for $\sqrt{\tau}$ in the ranges 0.51-0.54 (top) and 0.54-0.63 (bottom). These are equivalent to dilepton masses above the bottomonium region. Empty squares with joining line: Simulation (see text). \label{fig:X8}} \end{figure} Data in fig.5 come from NA10 (194 GeV beam), and the simulated events are a subset of 100,000 events sorted by DY\_AB5 in the mass range 4-9 GeV/c$^2$, assuming a negative pion beam of 194 GeV hitting Tungsten. The data in figs. 6, 7 and 8 all refer to the higher-energy NA10 beam, and their simulated counterparts have been extracted from a set of 100,000 events sorted between 4 and 7 GeV/c$^2$, a set of 100,000 events between 7 and 9 GeV/c$^2$, and a set of 50,000 events between 11 and 15 GeV/c$^2$. For the lower-energy beam (pions of 194 GeV, meaning $s$ $\approx$ 364 GeV$^2$) fig.5 reports the $x_F$ distribution for $\sqrt{\tau}$ in the range 0.33-0.36, equivalent to dilepton masses between about 6.3 and 6.9 GeV/c$^2$. Fig.6 considers the same $\sqrt{\tau}$ range for the higher-energy beam (286 GeV, meaning $s$ $\approx$ 537 GeV$^2$); in this case it corresponds to dilepton masses between 7.6 and 8.3 GeV/c$^2$. According to the scaling hypothesis, the two data distributions should be very similar in the $x_F$ range 0-0.6 covered by both measurements. For $\sqrt{\tau}$ $<$ 0.3, the $x_F$ range covered by NA10 becomes small and the data distributions are rather flat. Fig.7 reports data and simulations for the three $\sqrt{\tau}$ ranges 0.21-0.24, 0.24-0.27, 0.27-0.3, corresponding to masses ranging from 4.9 to 6.9 GeV/c$^2$. A comparative examination of the three sets of fig.7 suggests that (i) the simulation is worse for smaller $\sqrt{\tau}$, and (ii) the $K-$factor extracted from E615 is slightly less steep (in its dependence on $\tau$) than the one extracted from NA10.
The former fact depends on the increasing relevance, at small masses, of sea values of $x_{projectile}$. The default distribution for $x_\pi$ in DY\_AB5 comes from E615, where sea (anti)quarks of the pion play a minor role in the fit of the pion distributions. For this reason, data from both NA10 and E615 are reasonably fitted for $x_F$ $>$ $-0.1$, with the exception of the small-$\sqrt{\tau}$ distributions. A signal of the relevance of sea partons on the pion side is the shift of the $x_F-$distribution peak towards $x_F$ $=$ 0. Valence-dominated measurements show this peak at $x_F$ $=$ 0.1-0.3. When the sea of the pion becomes relevant, it is probably better to replace the default pion sea distribution of DY\_AB5 with more recent ones. Fig.7 suggests that this may be the case for $\sqrt{\tau}$ below 0.25. An interesting feature of NA10 is the presence of a conspicuous set of data at masses above the bottomonium mass. This is $not$ the situation for which DY\_AB5 was conceived; however, it is interesting to try to simulate these events. Fig.8 reports data and simulation for $\sqrt{\tau}$ in the ranges 0.51-0.54 (mass between 11.8 and 12.5 GeV/c$^2$) and 0.54-0.63 (mass between 12.5 and 14.6 GeV/c$^2$). We see that DY\_AB5 has difficulties in reproducing the shape of these event distributions for $x_F$ $<$ 0.2. Actually, the ``gap'' in these data distributions at $x_F$ $\approx$ 0 looks a little unnatural. More generally, in all the previous figures the agreement between the Monte Carlo and the data is worse at negative $x_F$, where the data distributions fall rather steeply. This could be related to the fact that negative-$x_F$ data are at the border of the region of good acceptance for fixed-target experiments.
\subsection{Sivers asymmetry plots in different calculation schemes} As observed in Section 2.5, the event cross section may be written in the form $\sigma$ $=$ $\sigma_0(X_1,X_2,P_t,\theta)$ $[1+A(X_1,X_2,P_t,\theta,..)]$, where the former term expresses the part of the cross section that does not contain azimuthal and spin asymmetries, while the asymmetries themselves are contained in $A$, and $A$ may be approximated in different ways. In particular, in DY\_AB5 $A$ contains flavor weight factors, which are absent in a more recent version, DY\_AB6. In DY\_AB6 the full cross section $\sigma$ is a sum of independent terms, each referring to a sea or valence parton, and each flavor contribution carries ``its own'' asymmetry. In DY\_AB5 $\sigma_0$ is a sum of flavor contributions, and $A$ is an independent sum of flavor contributions (see \cite{BR_JPG2}), weighted sensibly. \begin{figure}[ht] \centering \includegraphics[width=9cm]{pionminus_sivBR_mup.eps} \caption{ Scatter plot of asymmetries calculated by DY\_AB5 (horizontal) and by DY\_AB6 (vertical), for 7000 sorted $\pi^-p$ Drell-Yan events at $s$ $=$ 100 GeV$^2$, in the mass range 4-9 GeV/c$^2$. From \cite{BR_JPG2}. \label{fig:X9}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{pionplus_sivTo2_mlow.eps} \caption{ Scatter plot of asymmetries calculated by DY\_AB5 (horizontal) and by DY\_AB6 (vertical), for 20000 sorted $\pi^+p$ Drell-Yan events at $s$ $=$ 100 GeV$^2$, in the mass range 1.5-2.5 GeV/c$^2$. From \cite{BR_JPG2}. \label{fig:X10}} \end{figure} As long as azimuthal/spin asymmetries are neglected, the two codes produce the same results ($\sigma_0$ is the same in both cases). When asymmetries are included, the scheme implemented by DY\_AB6 is the more correct one.
Exploiting the equality of $\sigma_0$ in the two cases, in \cite{BR_JPG2} Drell-Yan events have been sorted according to $\sigma_0$ only, and the corresponding Sivers asymmetry (deriving from the $A$ term) has been calculated with the relations of both DY\_AB5 and DY\_AB6. The scatter plots of the two asymmetry calculations for each event are reported in several figures. From that work two figures are borrowed here, showing the two most ``extreme'' cases. Fig.9 reports 7000 events for $\pi^-p$ Drell-Yan at $s$ $=$ 100 GeV$^2$, lepton invariant mass in the range 4-6 GeV/c$^2$, transverse momentum in the range 1-3 GeV/c. The Sivers asymmetry is parameterized according to \cite{BRMCc}. Fig.10 reports 20000 events for $\pi^+p$ Drell-Yan at $s$ $=$ 100 GeV$^2$, lepton invariant mass in the range 1.5-2.5 GeV/c$^2$, transverse momentum in the range 1-3 GeV/c. The Sivers asymmetry is parameterized according to \cite{Torino05}. In the former case DY\_AB5 and DY\_AB6 produce the same Sivers asymmetry; in the latter the difference is striking. As discussed in depth in \cite{BR_JPG2}, the difference between the two is proportional to the role of sea (anti)quarks. Positive pions on protons, and small dilepton mass, enhance the role of sea partons. For these reasons the code DY\_AB6 has been prepared, aiming at situations where sea partons become more and more relevant. However, the use of DY\_AB6 has so far been restrained by the fact that the available parameterizations of functions like the Sivers one are normally produced by fitting data within the same ``ideological'' scheme as DY\_AB5. In these cases, the use of DY\_AB6 would increase the errors instead of decreasing them. This is discussed in full detail in \cite{BR_JPG2}. The main point is that when a ``global fit'' is undertaken using a scheme like that of DY\_AB5, the effect of sea partons is effectively included inside the valence quark distributions.
So, when the results of such a global fit are employed in DY\_AB6 to predict some result, it is dangerous to add separate sea quark contributions that are already present in the valence quark distributions. DY\_AB6 is more appropriate for \textit{theoretical} models of the Sivers function, built according to a scheme where each flavor is considered individually. But for the purpose of modelling an experimental apparatus, one normally prefers phenomenological parameterizations to theoretical models.
\section{Introduction} Laser-plasma interaction (LPI) \cite{kruer-lpi-1988} is an important plasma-physics problem which poses serious challenges to theoretical modeling. LPI is the basis of several applications, including laser-based particle acceleration \cite{tajima-laseraccel-prl-1979} and the backward Raman amplifier \cite{malkin-ramanamp-prl-1999}. Moreover, for inertial confinement fusion (ICF)\cite{atzeni-icf-2004,lindl-nif-pop-2004} to succeed, LPI must not be so active that it prevents the desired laser energy from being delivered to the target, with the desired spatial and temporal behavior. This paper focuses on modeling the backscatter instabilities, where a laser light wave (mode 0) decays into a backscattered light wave (mode 1) and a plasma wave (mode 2). In stimulated Raman scattering (SRS) and stimulated Brillouin scattering (SBS), the plasma wave is, respectively, an electron plasma wave and an ion acoustic wave. These LPI processes pose a serious risk to indirect-drive ICF \cite{lindl-nif-pop-2004}. A wide array of computational tools is used to model LPI, ranging from rapid ($\sim$secs) calculations of linear gains along 1D profiles to massively-parallel kinetic particle-in-cell simulations. We present here a new tool, called \textsc{deplete}, at the less computationally expensive end of this spectrum. \textsc{deplete}{} solves for the pump intensity and scattered-wave spectral density for a set of scattered frequencies, in steady-state, along a 1D profile of plasma conditions. Pump depletion is included, and the plasma waves are assumed to be in the strong damping limit (i.e., they do not advect). Fully kinetic (although linear) formulas are used for various quantities like the coupling coefficient. Bremsstrahlung noise and damping, as well as Thomson scattering (TS), are included. The \textsc{deplete}{} model, especially the noise sources, in some ways resembles that of Ref.\ \cite{berger-srsnoise-pofb-1989}.
Other similar works which have influenced our thinking, and use 1D coupled-mode equations, are Refs.\ \cite{ramani-sbs-pof-1983}-\cite{mounaix-lpi-pre-1997}. \textsc{deplete}{} is a 1D model, but the plasma conditions are generally found by tracing 3D geometric-optics ray paths through the output of a radiation-hydrodynamics code. We therefore call this combined approach to studying LPI a ray-based one. Details of this methodology, and its limits, are discussed in Sec.\ \ref{s:ray}. \textsc{deplete}{} is similar to the code \textsc{newlip}, which calculates linear gains for SRS and SBS along 1D profiles (\textsc{newlip}{} is discussed here in Appendix A). Both codes take seconds to analyze one profile from the laser entrance to the high-Z wall in an ICF ignition design. However, \textsc{deplete}{} includes substantially more physics than \textsc{newlip}, such as pump depletion, noise sources, and re-absorption of scattered light. \textsc{deplete}{} moreover provides pump and scattered intensities, which unlike gains can be directly compared with experiment and more sophisticated LPI codes. Despite its simplicity, \textsc{deplete}{} agrees well in certain cases with results from the 3D paraxial laser propagation code \ftd{}. This is quite promising given \textsc{deplete}'s much lower computing cost. There is important physics which \textsc{deplete}{} does not capture, with laser speckles or hot spots being one of the most important. Recent SBS experiments \cite{froula-sbs-prl-2007, neumayer-sbs-prl-2008} at the OMEGA Laser Facility \cite{boehly-omega-optcomm-1997} show good agreement between measured reflectivity and \ftd{} predictions, while \textsc{deplete}{} gives a lower value. This is due to the speckle pattern of the phase-plate-smoothed lasers. Sec.\ \ref{s:nif} describes one approximate way to bound the speckle enhancement by doubling the coupling coefficient; the resulting \textsc{deplete}{} reflectivity always exceeds the experimental level.
A more sophisticated idea for handling speckles is outlined in the conclusion. Additional beam smoothing, like polarization smoothing (PS) and smoothing by spectral dispersion (SSD), reduces the effective speckle intensity and can reduce the reflectivity even below the speckle-free \textsc{deplete}{} level. The paper is organized as follows. Section \ref{s:gov} derives the governing equations for the pump intensity and scattered-wave spectral density. Our ray-based methodology and model limits are discussed in Sec.\ \ref{s:ray}. The numerical method is given in Sec.\ \ref{s:num}, including a quasi-analytic solution for the coupling-Thomson step. Section \ref{s:bench} compares \textsc{deplete}{} with \textsc{newlip}{} linear gains and \ftd{} ``plane-wave'' simulations on prescribed profiles. The relationship between Thomson scattering and linear gain is discussed in Sec.\ \ref{s:thom}. In Sec.\ \ref{s:omsbs} we compare \textsc{deplete}{} to the experimental and \ftd{} SBS reflectivities in recent OMEGA shots. Sec.\ \ref{s:nif} presents a \textsc{deplete}{} analysis of an ignition design with a 285 eV radiation temperature for the National Ignition Facility (NIF) \cite{paisner-nif-fustech-1994}. In particular, we show the effect of scattered light re-absorption and put a bound on speckle enhancement. We conclude and discuss future prospects in Sec.\ \ref{s:conc}. A review of \textsc{newlip}{} and its linear gain is presented in Appendix A. Appendix B details the numerics of \textsc{deplete}'s coupling-Thomson step. \section{Governing equations} \label{s:gov} We derive coupled-mode equations, in time and one space dimension, for the slowly-varying wave envelopes, and find the resulting intensity equations. We do this for the light waves first, and then the plasma wave in the strong damping limit. Since our approach is standard we summarize some steps.
We take these equations in steady state to apply independently at each scattered frequency, and transition to a spectrum of scattered light per angular frequency. This may be viewed as a ``completely incoherent'' treatment of the scattered light at different frequencies. Bremsstrahlung damping and fluctuations, and TS, are then added phenomenologically. Focusing of the whole beam is finally accounted for, giving the system \textsc{deplete}{} solves. This section culminates in the \textsc{deplete}{} system, Eqs.\ (\ref{eq:I0gov}-\ref{eq:i1gov}), on which some readers may wish to focus. \subsection{Light-wave action equations} Let $z$ be distance along the profile, and assume all wave vectors and gradients are in $z$ ($\partial_x=\partial_y=0$). $z=0$ is taken as the left edge of the domain (the ``laser entrance''), where we specify the right-moving pump laser; we also specify boundary values for the left-moving backscattered wave at the right edge $z=L_z$. The light waves are linearly polarized in $y$ and represented by their vector potentials $\vec A_i = (1/2)A_i(z,t)\hat ye^{i\psi_i}+cc$, where $i=0,1$ for the pump and scattered wave, respectively. $A_i$ is the slowly-varying complex envelope, and we use the dimensionless $a_i \equiv eA_i/m_ec$. $\psi_i(z,t)$ is the rapidly-varying phase with $k_i\equiv\partial_z\psi_i$ and $\omega_i\equiv-\partial_t\psi_i$. Let $\sigma_i\equiv k_i/|k_i|$ with $\sigma_0=\sigma_2=+1$ and $\sigma_1=-1$ (appropriate for backscatter). Thermal fluctuations give rise to both light waves and plasma waves. However, upon appropriate averaging the \textit{field amplitudes} of these fluctuations vanish (but their \textit{mean squares} do not). The amplitudes $A_i$ (and $n_{j2}$ below) represent only the coherent, and not the noise, components of the fields. We add a bremsstrahlung noise source and TS to the intensity equations below.
From the Maxwell equations, and conservation of canonical transverse momentum $m_ev_{ye}=eA_y$, we find $A_y=(\vec A_0+\vec A_1)\cdot\hat y$ satisfies \begin{equation} \label{eq:Ay} \left[ \partial_{tt}-c^2\partial_{zz}+\omega_{pe}^2 \right] A_y = -\omega_{pe}^2 {n_{e2}\over n_e}A_y. \end{equation} $\tilde n_j=n_j+N_{j2}$ is the total number density for species $j$ ($j=e$ for electrons, $i$ for an ion species), $N_{j2}=(1/2)n_{j2}e^{i\psi_2}+cc$, and $n_{j2}$ is the slowly-varying plasma-wave envelope. We define $\omega_{pj}\equiv[n_jZ_j^2e^2/\epsilon_0m_j]^{1/2}$, $v_{Tj}\equiv[T_j/m_j]^{1/2}$ and $\lambda_{Dj}\equiv v_{Tj}/\omega_{pj}$, with $Z_j$ the charge state. As usual, the massive ions are treated as fixed in the transverse current. (We look forward to a circumstance where a positively-charged species must be considered mobile, such as an electron-positron plasma!) Following, e.g., Ref.~\cite{dewandre-wshift-pof-1981}, we introduce the small parameter $\delta \sim \omega_i^{-1} \partial_t \ln X \sim k_i^{-1} \partial_z \ln X$ for $X=A_i,k_i,$ etc. We order $\partial_t,\partial_z \sim\delta$, $\psi_i\sim\delta^{-1}$, and the right-hand side of Eq.~(\ref{eq:Ay}) $\sim\delta$. To order $\delta^0$, we obtain the free-wave dispersion relation \begin{equation} \omega_i^2 = \omega_{pe}^2 + c^2k_i^2 \qquad i=0,1. \end{equation} For the steady-state conditions considered below we take $\omega_i$ to be constant and find the eikonal $ck_i(z)=\sigma_i\eta_i\omega_i$ with $\eta_i\equiv [1-n_e/n_{ci}]^{1/2}$ and $n_{ci}\equiv\omega_i^2\epsilon_0m_e/e^2$ the critical density of mode $i$. Also, the group velocity is $v_{gi}\equiv \sigma_i\eta_ic$.
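These eikonal quantities reduce to a few lines of arithmetic. As an illustration (hypothetical helper names, not part of \textsc{deplete}; the 351 nm example wavelength is our choice, not taken from the text), the critical density $n_{ci}=\omega_i^2\epsilon_0m_e/e^2$ and index $\eta_i=[1-n_e/n_{ci}]^{1/2}$ can be evaluated as:

```python
import math

# SI constants
e, m_e, c, eps0 = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8, 8.8541878128e-12

def critical_density(lambda_i):
    """n_ci = omega_i^2 eps0 m_e / e^2 for vacuum wavelength lambda_i (m)."""
    omega_i = 2.0 * math.pi * c / lambda_i
    return omega_i**2 * eps0 * m_e / e**2          # m^-3

def eta(n_e, lambda_i):
    """eta_i = sqrt(1 - n_e/n_ci); the group velocity is sigma_i * eta_i * c."""
    return math.sqrt(1.0 - n_e / critical_density(lambda_i))

n_c = critical_density(351e-9)   # 351 nm light: ~9.0e27 m^-3 (~9.0e21 cm^-3)
print(n_c, eta(0.1 * n_c, 351e-9))
```

This reproduces the familiar scaling $n_c[\mathrm{cm^{-3}}]\approx1.1\times10^{21}/\lambda_{\mu\mathrm{m}}^2$.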
Assuming perfect phase matching ($k_0=k_1+k_2$, $\omega_0=\omega_1+\omega_2$), the resonant order $\delta$ terms in Eq.~(\ref{eq:Ay}) yield the envelope equations: \begin{eqnarray} \label{eq:L0a0} L_0 a_0 &=& -{i\over4}{\omega_{pe}^2\over\omega_0}{n_{e2}\over n_e}a_1, \\ \label{eq:L1a1} L_1 a_1 &=& -{i\over4}{\omega_{pe}^2\over\omega_1}{n_{e2}^*\over n_e}a_0. \end{eqnarray} The operator $L_i \equiv \partial_t + v_{gi}\partial_z + (1/2\omega_i)(\partial_t\omega_i+c^2\partial_zk_i)$. Our quasi-monochromatic light waves ($i=0,1$) have action density \cite{bers-leshouches} $N_i\equiv (m_e/8\pi r_e)\omega_ia_ia_i^*$ where $r_e\equiv e^2/4\pi\epsilon_0m_ec^2\approx2.82$ fm. We also define the (positive) action flux $Z_i\equiv N_i|v_{gi}|$ and intensity $I_i\equiv \omega_iZ_i$. In practical units, \begin{equation} \label{eq:aiIi} |a_i|^2 = {I_i\lambda_i^2 \over P_{em} \eta_i} \end{equation} where $\lambda_i\equiv 2\pi c/\omega_i$ and $P_{em}\equiv (\pi/2)m_ec^3/r_e$ $\approx 1.37\times10^{18}$ W$\cdot$cm$^{-2}\cdot\mu$m$^2$. We form Eq.~(\ref{eq:L0a0})$\times a_0^* + cc$ and Eq.~(\ref{eq:L1a1})$\times a_1^* + cc$ to find \begin{eqnarray} -\partial_tN_0 - \partial_zZ_0 &=& \partial_tN_1 - \partial_zZ_1 = J \\ J &\equiv& -{1\over4}m_ec^2\ \im[a_0^*a_1n_{e2}]. \end{eqnarray} \subsection{Plasma-wave action equations} We describe the plasma waves following the dielectric operator approach of Cohen and Kaufman \cite{cohen-drivenepw-pof-1977}: \begin{eqnarray} \label{eq:epn2} \epsilon(\omega_2'+i\partial_t,k_2-i\partial_z) n_2\ &=& n_\mathrm{pnd}, \\ n_\mathrm{pnd} &\equiv& \chi_e(\omega_2',k_2){c^2k_2^2 \over 2\omega_{pe}^2} n_e a_0 a_1^*. \end{eqnarray} The charge-density fluctuation $n_2 \equiv -n_{e2} + \sum_iZ_in_{i2}$ experiences a ponderomotive drive $n_\mathrm{pnd}$.
$\omega_2'\equiv\omega_2-\vec k_2\cdot\vec u$ is the Doppler-shifted plasma-wave frequency in the frame of the plasma flow $\vec u$ ($\omega_2$ is in the lab frame). $\epsilon\equiv 1+\chi$ is an operator, where the time and space derivatives reflect envelope evolution and $\chi\equiv\sum_j\chi_j$ is the total susceptibility. $\chi_e$ in $n_\mathrm{pnd}$ is simply a function, not an operator. $\chi_j$ is the (linear) kinetic, collisionless susceptibility of Maxwellian species $j$: \begin{equation} \chi_j \equiv -{1\over 2k_2^2\lambda_{Dj}^2}Z'(\zeta_j); \qquad \zeta_j \equiv {\omega_2'\over k_2v_{Tj}\sqrt2}. \end{equation} $Z(\zeta) \equiv i\pi^{1/2}e^{-\zeta^2}\mathrm{erfc}(-i\zeta)$ is the plasma dispersion function \cite{fried-zfunc-1961} and $\mathrm{erfc}$ is the complementary error function \cite{absteg}. Gauss's law relates $n_2$ and $n_{j2}$: \begin{eqnarray} \label{eq:ne2n2} n_{e2} &=& -(1+\chi_I)n_2, \\ n_{i2} &=& -\chi_i \left( {1 \over Z_i} + {m_e \over m_i}{\epsilon \over \chi_e} \right) n_2 \\ &\approx& -{\chi_i \over Z_i} n_2, \end{eqnarray} with $\chi_I \equiv \sum_i\chi_i$. For SRS, where the ion motion is negligible, we usually take $1+\chi_I\rightarrow 1$ to save computing time. Expanding $\epsilon$ for slow envelope variation, and retaining only $\epsilon_r\equiv\mathrm{Re}\,\epsilon$ in the derivatives, gives \begin{equation} \left[ \partial_t + v_{g2} \partial_z + \nu_2 + i\delta\omega_2 \right] n_2 = -i {n_\mathrm{pnd} \over \dot\epsilon}. \end{equation} $\dot\epsilon \equiv \partial\epsilon_r/\partial\omega_2'$, $\epsilon'\equiv \partial\epsilon_r/\partial k_2$, $v_{g2} \equiv - \epsilon'/\dot\epsilon$ is the plasma-wave group velocity, $\nu_2 \equiv \im[\epsilon]/\dot\epsilon$ is the damping rate, and $\delta\omega_2 \equiv -\epsilon_r/\dot\epsilon$ is the phase detuning.
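Numerically, $Z(\zeta)$ is conveniently obtained from the Faddeeva function $w(\zeta)=e^{-\zeta^2}\mathrm{erfc}(-i\zeta)$, so that $Z(\zeta)=i\sqrt\pi\,w(\zeta)$ and $Z'(\zeta)=-2[1+\zeta Z(\zeta)]$. A minimal sketch using SciPy's \texttt{wofz} as one readily available implementation (hypothetical helper names, not \textsc{deplete}'s actual routines):

```python
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z) = exp(-z^2) erfc(-i z)

def Z(zeta):
    """Plasma dispersion function: Z(zeta) = i sqrt(pi) w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def chi_j(omega2p, k2, lambda_Dj, v_Tj):
    """Kinetic susceptibility chi_j = -Z'(zeta_j)/(2 k2^2 lambda_Dj^2),
    with zeta_j = omega2p/(k2 v_Tj sqrt(2)) and Z'(zeta) = -2(1 + zeta Z(zeta))."""
    zeta = omega2p / (k2 * v_Tj * np.sqrt(2.0))
    Zprime = -2.0 * (1.0 + zeta * Z(zeta))
    return -Zprime / (2.0 * (k2 * lambda_Dj)**2)
```

In the static limit $\zeta_j\to0$ this reproduces the familiar $\chi_j\to(k_2\lambda_{Dj})^{-2}$.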
We now assume the plasma wave is in the strong damping limit, where its advection is neglected: $|v_{g2}\partial_zn_2| \ll |\nu_2+i\delta\omega_2| |n_2|$. This implies the instability is below its absolute threshold so that steady-state solutions are accessible. Also going to steady-state, we find \begin{equation} \label{eq:n2npnd} \epsilon(\omega_2,k_2)n_2 = n_\mathrm{pnd}. \end{equation} Replacing $n_{e2}$ via Eqs.~(\ref{eq:ne2n2}) and (\ref{eq:n2npnd}) yields \begin{equation} \label{eq:J2} J = \omega_0\tilde\Gamma_1 Z_0Z_1. \end{equation} The coupling coefficient $\tilde\Gamma_1$ is \begin{eqnarray} \tilde\Gamma_1 &\equiv& \Gamma_S\im\left[{\chi_e\over \epsilon}(1+\chi_I) \right] \label{eq:Gam1} = {\Gamma_Sg_\Gamma \over |\epsilon|^2}, \label{eq:Gam1res} \\ \Gamma_S &\equiv& {2\pi r_e \over m_ec^2}{1\over\omega_0}{k_2^2 \over k_0|k_1|}, \\ g_\Gamma &\equiv& |1+\chi_I|^2\im\chi_e + |\chi_e|^2\im\chi_I. \end{eqnarray} The second form of $\tilde\Gamma_1$ exhibits the resonance for $|\epsilon|\ll 1$. The over-tilde on $\tilde\Gamma_1$ indicates it will be modified below to account for beam focusing. $\tilde\Gamma_1$, and thus $J$, are usually positive. We now have a closed system for modes 0 and 1, with no independent equation for mode 2: \begin{eqnarray} \partial_tN_0 + \partial_zZ_0 &=& -\omega_0\tilde\Gamma_1 Z_0Z_1, \label{eq:NZ0} \\ \partial_tN_1 - \partial_zZ_1 &=& \omega_0\tilde\Gamma_1 Z_0Z_1. \label{eq:NZ1} \end{eqnarray} \subsection{Steady-state equations for a spectrum of scattered waves} We transition to steady state ($\partial_t=0$) and work with intensities. Since we have assumed $\partial_z\omega_i=0$, we multiply Eq.\ (\ref{eq:NZ0}) by $\omega_0$ and Eq.\ (\ref{eq:NZ1}) by $\omega_1$ to obtain \begin{eqnarray} d_zI_0 &=& -{\omega_0\over\omega_1}\tilde\Gamma_1 I_0I_1, \label{eq:I0} \\ -d_zI_1 &=& \tilde\Gamma_1 I_0I_1.
\label{eq:I1} \end{eqnarray} Here and elsewhere, $d_xf(x)$ denotes the ordinary derivative of a function of one variable, while $\partial_xf$ denotes the partial derivative of a function of several variables. The bremsstrahlung source and TS are expressed in terms of spectral density $i_1(z,\omega_1)$ (intensity per angular frequency). The scattered intensity is then $I_1=\int d\omega_1\, i_1$. We take Eq.\ (\ref{eq:I1}) to apply independently at each $\omega_1$, and integrate the coupling term in Eq.\ (\ref{eq:I0}), to find \begin{eqnarray} d_zI_0 &=& -\int d\omega_1{\omega_0\over\omega_1}\tilde\Gamma_1 I_0i_1, \label{eq:iI0} \\ -\partial_zi_1 &=& \tilde\Gamma_1 I_0i_1. \label{eq:ii1} \end{eqnarray} This is a totally incoherent treatment of the scattered light at different frequencies, and is unrealistic to the extent there is spectral ``leakage'' between nearby $\omega_1$ intervals due to, e.g., envelope evolution. \subsection{Bremsstrahlung source and damping} We incorporate electron-ion inverse-bremsstrahlung light-wave damping ($\kappa_0$ and $\kappa_1$) phenomenologically for modes 0 and 1, as well as bremsstrahlung noise ($\tilde\Sigma_1$) for mode 1, to find \begin{eqnarray} d_zI_0 &=& -\kappa_0I_0 -\int d\omega_1{\omega_0\over\omega_1}\tilde\Gamma_1 I_0i_1, \\ -\partial_zi_1 &=& -\kappa_1i_1 + \tilde\Sigma_1 +\tilde\Gamma_1 I_0i_1. \label{eq:i1brem} \end{eqnarray} As for $\tilde\Gamma_1$, the over-tilde on $\tilde\Sigma_1$ denotes it will be modified due to focusing. $I_0$ and $i_1$ represent integrals over solid angles in $k$ space, which we now specify. Absolute solid angles are needed in the noise sources, and cannot be simply scaled away, because scattered intensities determine pump depletion. We follow closely Bekefi's book \cite{bekefi-radiation-1966} in this section. We take $I_i = \Omega_i I_{i,\Omega}$ for $i=0,1$ (see Secs.\ 1.6 and 1.7 of Bekefi). 
$I_{i,\Omega}$ is the intensity per solid angle interval $d\Omega$ in $k$ space, which we assume is constant over the solid angle $\Omega_i$ that participates in the scattering. $\Omega_i$ is the local (in $z$) solid angle in the plasma, which we express in terms of a cone half-angle $\theta_{p,i}$ as \begin{equation} \Omega_i \equiv 2\pi(1-\cos\theta_{p,i}). \end{equation} From Snell's law, $\theta_{p,i}$ varies with $z$ according to \begin{equation} \cos\theta_{p,i} = \begin{cases} 0 \quad \mathrm{if}\ n_e \geq n_{ci}\cos^2\theta_v \\ [1-\eta_i^{-2}\sin^2\theta_v]^{1/2} \quad \mathrm{otherwise}. \end{cases} \end{equation} $n_{ci}\cos^2\theta_v$ is the ``critical density'' above which we cut off backscatter ($\tilde\Gamma_1=\tilde\Sigma_1=\kappa_1=0$). $\theta_v$ is a ``vacuum'' cone angle, which we find from the solid angle in the beam's F-cone (for simplicity we use the same solid angle for pump and scattered light). This is reasonable if the scattering mostly occurs in laser speckles that are near diffraction-limited. In terms of laser optics F-number $F$, \begin{eqnarray} \cos\theta_v &\equiv& \left[ 1+{1\over 4F^2} \right]^{-1/2} \approx 1-{1\over8F^2}, \\ \Omega_i^v &\equiv& 2\pi(1-\cos\theta_v)\approx{\pi\over 4F^2}. \end{eqnarray} The approximate forms apply for $F\gg1$. The upshot of the solid angle discussion (see especially Eq.\ (1.133) of Bekefi) is \begin{equation} \tilde\Sigma_1 = \Omega_1 j(\omega_1), \end{equation} where $j(\omega)$ is the emission coefficient, per $d\Omega$ and in one polarization (see p.\ 134 of Bekefi): \begin{equation} j(\omega_i) = {\eta_i \over 12\pi^3\sqrt{2\pi}} {\omega_{pe}^4 \over v_{Te}} {m_er_e \over c} \sum_{j\in\mathrm{ions}}{n_j\over n_e}Z_j^2\ln\Lambda_{ej}. \end{equation} $\ln\Lambda_{ej}$ is sometimes called the Gaunt factor and resembles the Coulomb logarithm, although it arises in calculations \textit{without} ad hoc cutoffs on impact parameter integrals (see Chap.\ 3 of Bekefi). 
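The solid-angle bookkeeping above is a few lines of arithmetic. A sketch with hypothetical helper names, including the backscatter cutoff and the large-$F$ estimate $\Omega_i^v\approx\pi/4F^2$:

```python
import math

def theta_v_from_F(F):
    """Vacuum cone half-angle: cos(theta_v) = [1 + 1/(4 F^2)]^(-1/2)."""
    return math.acos((1.0 + 1.0 / (4.0 * F**2)) ** -0.5)

def cos_theta_p(n_e, n_ci, theta_v):
    """In-plasma cone half-angle from Snell's law, with the backscatter
    cutoff at n_e >= n_ci cos^2(theta_v); eta^2 = 1 - n_e/n_ci."""
    if n_e >= n_ci * math.cos(theta_v)**2:
        return 0.0
    eta_sq = 1.0 - n_e / n_ci
    return math.sqrt(1.0 - math.sin(theta_v)**2 / eta_sq)

def solid_angle(cos_theta):
    """Omega = 2 pi (1 - cos theta)."""
    return 2.0 * math.pi * (1.0 - cos_theta)

# For F = 8 optics, Omega_v is close to the large-F estimate pi/(4 F^2)
tv = theta_v_from_F(8.0)
print(solid_angle(math.cos(tv)), math.pi / 256.0)
```

In vacuum ($n_e=0$) the cone angle reduces to $\theta_v$, and the cone collapses ($\Omega_i\to0$) at the cutoff density.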
For the case $\omega_i>\omega_{pe}$, Bekefi finds $\Lambda_{ej}=v_{Te}/(\omega_i\bmin)$ where \begin{equation} \bmin = \begin{cases} {\gamma \over 4}{\hbar \over \sqrt{m_eT_e}} &\mathrm{if}\ T_e>77Z_j^2\ \mathrm{eV}, \\ \left({\gamma \over 2}\right)^{5/2} Z_jr_e{m_ec^2 \over T_e} &\mathrm{otherwise}. \end{cases} \end{equation} The first, high-$T_e$ case typically applies for hohlraum conditions. The numerical pre-factors come from a detailed binary-collision calculation, and $\gamma=e^C\approx 1.781$ where $C\approx0.577$ is the Euler--Mascheroni constant. Our expression for $j$ does not include the enhanced emission for $\omega_i\approx\omega_{pe}$ due to collective effects \cite{dawson-emission-pof-1962}. We find the absorption coefficient $\kappa_i$ via Kirchhoff's law (see Bekefi Sec.\ 2.3): \begin{equation} \kappa_i = {\Omega_i \over \Omega_i^v} {j(\omega_i) \over B_v(\omega_i)}. \end{equation} Our $\kappa_i$ equals Bekefi's $\alpha_\omega$. $B_v$ is the vacuum blackbody spectrum for one polarization, with units $dI/(d\omega\, d\Omega)$: \begin{eqnarray} B_v(\omega) &\equiv& {\hbar \over 8\pi^3c^2} {\omega^3 \over e^{\hbar\omega/T_e}-1} \\ &\approx& {\omega^2T_e \over 8\pi^3c^2} \qquad \hbar\omega \ll T_e. \end{eqnarray} $j$ given above was found for collision durations short compared to the light-wave period, which entails the Jeans limit $\hbar\omega \ll T_e$. We therefore use the approximate form of $B_v$ to obtain \begin{equation} \kappa_i = {\sqrt2 \over 3\sqrt\pi} {\Omega_i \over \Omega_i^v} {r_ec\eta_i\over\omega_i^2} {\omega_{pe}^4 \over v_{Te}^3} \sum_{j\in\mathrm{ions}}{n_j\over n_e}Z_j^2\ln\Lambda_{ej}. \end{equation} For an optically thick plasma ($\partial_zi_1=0$) with no pump ($I_0=0$), we obtain for $i_1$ from Eq.\ (\ref{eq:i1brem}) the fluctuation level \ensuremath{i_1^\mathrm{OT}}: \begin{equation} \label{eq:ot} \ensuremath{i_1^\mathrm{OT}} \equiv {\Sigma_1\over\kappa_1} = {\Omega_1^v \over f} B_v(\omega_1).
\end{equation} $f$ and $\Sigma_1$ are defined in Sec.\ \ref{s:focus}. We thus recover the blackbody spectrum, required by Kirchhoff's law. The factor $\eta_1^2$ that usually appears in the blackbody spectrum in a plasma is absent due to our treatment of solid angles. \subsection{Thomson scattering} Thomson scattering (TS) refers to scattering off plasma-wave fluctuations resulting from particle discreteness (\cite{oberman-fluct-hpp}, p.~308). Had we retained a separate plasma wave equation, the fluctuations would appear in it as \v{C}erenkov emission \cite{berger-srsnoise-pofb-1989}. It is an important noise source for backscatter, especially for SBS. We express $\Delta p_1$, the TS scattered power increment per $d\omega_1$ per $d\Omega_1$ ($k_1$ solid angle), within a thin slab of width $\Delta z$, as \begin{eqnarray} \label{eq:Dp1} \Delta p_1 &=& {d\sigma \over d\omega_1d\Omega_1} I_0 \\ {d\sigma \over d\omega_1d\Omega_1} &=& n_eA(z)\Delta z \psi r_e^2 {S\over 2\pi}. \end{eqnarray} $A(z)$ is the beam area, defined in Sec.\ \ref{s:focus}. $\psi \equiv 1-\sin^2\theta_s\sin^2\theta_a$ is a geometric factor. $\theta_s$ is the angle between $\vec k_0$ and $\vec R$, the vector from source to ``observation point''. For a beam with large $F$, $\theta_s \sim \theta_v \ll 1$. $\theta_a$ is the angle between $\vec R$ and the pump polarization. We usually take $\psi=1$. The form factor $S$ (units of time) is from Eq.~(138) of Ref.\ \cite{oberman-fluct-hpp}, valid for arbitrary (non-Maxwellian) distributions, generalized to multiple ion species: \begin{eqnarray} {|\epsilon|^2 \over 2\pi}S(\vec k,\omega) &=& |1+\chi_I|^2F_e + |\chi_e|^2\sum_{j\in\mathrm{ions}}{n_j\over n_e}Z_j^2 F_j \\ F_j &\equiv& \int d^3v\ f_j(\vec v)\delta(\omega+\vec k\cdot\vec v). \end{eqnarray} $f_j$ is the distribution function of species $j$ $(\int d^3v\ f_j=1)$.
For a Maxwellian, \begin{equation} F_j={1\over kv_{Tj}\sqrt{2\pi} } e^{-\zeta_j^2} = {(k\lambda_{Dj})^2 \over \pi\omega} \im\chi_j, \end{equation} and \begin{equation} {\omega|\epsilon|^2 \over 2(k\lambda_{De})^{2}} S = g_\tau \equiv |1+\chi_I|^2\im\chi_e + |\chi_e|^2\sum_{j\in\mathrm{ions}}{T_j\over T_e}\im\chi_j. \end{equation} This form agrees with the multiple-ion result in Eq.~(3) of Ref.~\cite{evans-thom-ppcf-1970}. Henceforth we assume Maxwellian distributions. From Eq.\ (\ref{eq:Dp1}) we form a differential equation for $i_1$ that describes TS: \begin{equation} \left. \partial_zi_1 \right|_{TS} = \tau_1I_0 \qquad \tau_1 \equiv {\Omega_1 \over AI_0} {\Delta p_1 \over \Delta z}. \end{equation} Since TS transfers energy from the pump to the scattered waves, we include it in both equations: \begin{eqnarray} d_zI_0 &=& -\kappa_0I_0 - \int d\omega_1 {\omega_0\over\omega_1}I_0(\tau_1 + \tilde\Gamma_1 i_1), \label{eq:thom0} \\ -\partial_zi_1 &=& -\kappa_1i_1 + \tilde\Sigma_1 + I_0(\tau_1 + \tilde\Gamma_1i_1). \label{eq:thom1} \end{eqnarray} For convenience we write $\tau_1$ as \begin{eqnarray} \tau_1 &=& \Omega_1n_er_e^2\psi {S(k_2,\omega_2') \over 2\pi} = {\tau_Sg_\tau \over |\epsilon|^2}, \\ \tau_S &\equiv& {\Omega_1\psi \over \pi}n_er_e^2{(k_2\lambda_{De})^2 \over \omega_2'}. \end{eqnarray} $\tau_1$ is always positive, while $\tau_S$ and $g_\tau$ have the same sign as $\omega_2'$ (which can be negative for IAWs when the plasma flow is supersonic along $\vec k_0$). It is useful to note that $i_\tau\equiv \tau_1/\Gamma_1$ sometimes plays the role of an effective seed level for $i_1$: \begin{equation} \label{eq:itau} i_\tau \equiv {\tau_1 \over \Gamma_1} = {\tau_Sg_\tau \over \Gamma_Sg_\Gamma}. \end{equation} For the special case $T_i=T_e$, we have $g_\tau=g_\Gamma$ and $i_\tau$ is independent of $\chi_j$: \begin{equation} \label{eq:itauTeTi} i_\tau = {\tau_S\over\Gamma_S}={\Omega_1\psi\over(2\pi)^3}{\omega_0\over\omega_2'}T_ek_0|k_1|, \qquad T_i=T_e.
\end{equation} This fact is used in Sec.\ \ref{s:thom} to discuss the relation of TS to linear gain. \subsection{Whole-beam focusing} \label{s:focus} We wish to incorporate the effects of whole-beam focusing in a simple way. The equations as written hold locally in $z$, but do not model focusing. To do this, we treat the transverse intensity patterns of $I_0$ and $I_1$ as uniform flattops of varying area $A(z)$. The beam focuses at the focal spot $z_F$, where $A$ attains its minimum $A(z_F)$. Let $\tilde I_i \equiv I_i(z)/f(z)$ be the total power at $z$ divided by the focal spot area, with focusing factor $f \equiv A(z_F)/A(z) \leq1$. We typically employ for $f$ the result for the on-axis intensity of a Gaussian beam \cite{milonni-lasers-1988}: \begin{equation} f = [1 + (z-z_F)^2/z_0^2]^{-1} \end{equation} where $z_0$ is an effective Rayleigh range. For a Gaussian beam with optics F-number $F$, $z_0=(4/\pi)\lambda F^2$. This form approximately fits the random phase plate (RPP) smoothed beams designed for NIF (for an appropriate $z_0$). Substituting $(I_0,i_1) =f \cdot(\tilde I_0,\tilde i_1)$ into Eqs.~(\ref{eq:thom0}-\ref{eq:thom1}), and freely commuting $f$ with $\partial_z$, yields the principal equations solved by \textsc{deplete}: \begin{eqnarray} d_zI_0(z) &=& -\kappa_0I_0 - I_0\int d\omega_1\ {\omega_0\over\omega_1}(\tau_1 + \Gamma_1i_1) \label{eq:I0gov}, \\ \partial_zi_1(z,\omega_1) &=& \kappa_1i_1 -\Sigma_1 - I_0(\tau_1 + \Gamma_1i_1). \label{eq:i1gov} \end{eqnarray} $\Gamma_1 \equiv f\tilde\Gamma_1$ and $\Sigma_1\equiv f^{-1}\tilde\Sigma_1.$ In Eqs.~(\ref{eq:I0gov}-\ref{eq:i1gov}) and henceforth, all $I_i$ and $i_1$ are understood to have suppressed over-tildes, that is, to refer to total transverse powers over focal-spot area.
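As a small arithmetic sketch of the focusing model (hypothetical helper name; in practice $z_0$ would be chosen to fit the actual RPP beam, and the 351 nm, $F=8$ values are illustrative assumptions):

```python
import math

def focusing_factor(z, z_F, lam, F):
    """On-axis Gaussian-beam factor f = [1 + ((z - z_F)/z0)^2]^(-1),
    with effective Rayleigh range z0 = (4/pi) * lam * F^2."""
    z0 = (4.0 / math.pi) * lam * F**2
    return 1.0 / (1.0 + ((z - z_F) / z0) ** 2)

# 351 nm light with F = 8 optics: z0 ~ 28.6 microns; f = 1 at focus
# and f = 1/2 one Rayleigh range away
z0 = (4.0 / math.pi) * 351e-9 * 64.0
print(focusing_factor(0.0, 0.0, 351e-9, 8.0), focusing_factor(z0, 0.0, 351e-9, 8.0))
```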
Similarly, the plasma-wave amplitude from Eq.\ (\ref{eq:n2npnd}) can be written \begin{equation} {n_2 \over n_e} = {1\over2}{\chi_e\over\epsilon} \left[{ck_2\over \omega_{pe}}\right]^2 f\ \tilde a_0 \tilde a_1^* \end{equation} with $\tilde a_i^2\equiv \tilde I_i\lambda_i^2/(P_{em}\eta_i)$; see Eq.\ (\ref{eq:aiIi}). All symbols in Eqs.~(\ref{eq:I0gov}-\ref{eq:i1gov}) are positive, except $\Gamma_1$ may be negative for SBS in case $\omega_2'<0$. This corresponds to the scattered wave having a higher frequency than the pump, in the plasma frame. The scattered wave then gives energy to the pump, and \textsc{deplete}{} handles this situation correctly. \section{Ray methodology and model limits} \label{s:ray} \textsc{deplete}{} calculates LPI along given plasma conditions for a 1D profile. A typical application is to study a laser beam propagating through conditions given by a rad-hydro simulation. We use many independent rays to model the whole beam, which introduces some statistical inaccuracy. The rays are generally found by tracing 3D refracted paths through the rad-hydro output. Although strictly not a part of \textsc{deplete}{}, this is the major way we utilize geometric-optics rays. Wave-optics effects, such as laser speckles and diffraction (of both the pump and scattered light), are also not included in \textsc{deplete}{}. We present one way to approximate gain enhancement due to speckles in Sec.\ \ref{s:omsbs}. However, laser intensity is \textit{not} found from a rad-hydro simulation. Such codes generally treat a laser beam as a set of rays, which are absorbed as they trace out refracted paths. The laser intensity in a zone is found by dividing the total power of all rays crossing that zone by its transverse area. This approach suffers from several problems for our purposes, including the fact that intensities remain finite at caustics only due to the finite number of rays and zone size.
Instead, we run \textsc{deplete}{} separately for each ray, and use a model for the laser beam to give an initial intensity (at a sufficiently low density that little absorption has occurred) and $z$-dependent focusing factor (generally based on vacuum propagation). The intensity along a \textsc{deplete}{} 1D profile is thus independent of refraction that occurs due to the plasma. Refractive changes in beam intensity occur, for instance, when a beam propagates between two high-density regions. However, our independent-ray treatment has the benefit that caustics pose no problem. \textsc{deplete}{} assumes that the laser and scattered light follow the same path, and thus see the same plasma conditions. The two light waves refract differently if their wavelengths differ, as in SRS, or in SBS for certain transverse plasma flows \cite{hinkel-flow-pop-1999}. The departure of ray paths becomes significant when, in the gain region for a given wavelength, the two rays see plasma conditions different enough that the coupling or other coefficients differ significantly. This requires sufficiently strong transverse plasma gradients. \section{Numerical method} \label{s:num} We solve the \textsc{deplete}{} system Eqs.\ (\ref{eq:I0gov}-\ref{eq:i1gov}) from the laser entrance $z=0$ to the right edge $z=L_z$. For backscatter (considered in this paper), we give $I_{0L}$ and $i_{1R}(\omega_1)$ as boundary conditions; for any quantity $f$, the subscripts $L$ and $R$ denote edge values, $f_L\equiv f(z=0)$ and $f_R\equiv f(z=L_z)$. We solve this two-point boundary value problem via a shooting method, marching from right to left. We guess $I_{0R}$ and solve the initial value problem from $z=L_z$ down to $z=0$, and iterate until the resulting $I_{0L}$ is sufficiently close to the desired value. Because $I_{0R}$ is just one scalar, it is more feasible to shoot on it than on the set of values $i_{1L}(\omega_1)$.
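To make the shooting procedure concrete, consider a stripped-down model with a single scattered frequency, no damping or noise, and $\omega_0/\omega_1\approx1$: $d_zI_0=d_zI_1=-\Gamma I_0I_1$, so that $I_0-I_1$ is conserved. The sketch below (hypothetical helpers, not \textsc{deplete}{} itself) marches right-to-left from a guessed $I_{0R}$ and secant-iterates on that single scalar until $I_0(0)$ matches the prescribed $I_{0L}$:

```python
def march_left(I0R, i1R, Gamma, L, n=2000):
    """March the toy system d_z I0 = d_z I1 = -Gamma I0 I1 from z = L
    down to z = 0 with a midpoint (RK2) step; I0 - I1 is conserved."""
    dz = L / n
    I0, I1 = I0R, i1R
    for _ in range(n):
        k = -Gamma * I0 * I1                               # common z-derivative
        I0m, I1m = I0 - 0.5 * dz * k, I1 - 0.5 * dz * k    # half-step leftward
        km = -Gamma * I0m * I1m
        I0, I1 = I0 - dz * km, I1 - dz * km                # full leftward step
    return I0, I1

def shoot(I0L_target, i1R, Gamma, L, tol=1e-10):
    """Secant iteration on the single unknown I0R until I0(z=0) = I0L_target."""
    g0, g1 = I0L_target, 0.9 * I0L_target
    r0 = march_left(g0, i1R, Gamma, L)[0] - I0L_target
    for _ in range(100):
        r1 = march_left(g1, i1R, Gamma, L)[0] - I0L_target
        if abs(r1) <= tol * I0L_target:
            return g1
        g0, g1, r0 = g1, g1 - r1 * (g1 - g0) / (r1 - r0), r1
    return g1
```

Because the residual is nearly linear in $I_{0R}$ at modest reflectivity, the secant iteration converges in a handful of marches.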
Generalizing our approach to 3D, where one would have to shoot on $I_{0R}(x,y)$ over a transverse plane, is much more difficult; a different technique for 3D pump depletion is used in the code \slip{} \cite{froula-lengthlim-prl-2008}. For the right-boundary seed value $i_{1R}$, we either use 0 or the optically-thick \ensuremath{i_1^\mathrm{OT}}{} from Eq.\ (\ref{eq:ot}). The choice seems to have little effect, since volume sources (either TS or bremsstrahlung) typically produce a comparable or larger noise level after a short distance. We solve Eqs.~(\ref{eq:I0gov}-\ref{eq:i1gov}) by operator splitting \cite{strang-splitting-siamjna-1968,yanencko-fracstep-1970}. Let the operator $B$ solve the ``bremsstrahlung'' system \begin{eqnarray} d_zI_0 &=& -\kappa_0I_0 \label{eq:brem0}, \\ \partial_zi_1 &=& \kappa_1i_1 -\Sigma_1 \label{eq:brem1}, \end{eqnarray} and the operator $C$ solve the ``coupling-Thomson'' system \begin{eqnarray} d_zI_0 &=& - I_0\int d\omega_1\ {\omega_0\over\omega_1}(\tau_1 + \Gamma_1i_1), \label{eq:coup0}\\ \partial_zi_1 &=& - I_0(\tau_1 + \Gamma_1i_1) \label{eq:coup1}. \end{eqnarray} To advance the solution from the discrete gridpoint $z^n$ down to $z^{n-1}$ (the decreasing index matches \textsc{deplete}'s right-to-left marching), we first apply $B$ for a half-step, then $C$ for a full step, then $B$ for a half-step again. The splitting theorem guarantees that if $B$ and $C$ are second-order accurate operators, then the overall step is second-order accurate. Schematically, a complete step is \begin{equation} \{I_0,i_1\}^{n-1} = B_{1/2}C_1B_{1/2} \{I_0,i_1\}^n. \label{eq:BCBstep} \end{equation} In usual applications we are given plasma conditions, and thus the coefficients in the \textsc{deplete}{} equations, only at a discrete set of points $\{z^n\}$. We use linear interpolation to find the coefficients at the needed intermediate points, as shown below.
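The $B_{1/2}C_1B_{1/2}$ cycle can be sketched with coefficients frozen across a zone. $B$ then has an exact exponential solution, and for $C$ we use the constant-coefficient analogue of the quasi-analytic step, $i_1\rightarrow(i_1+i_\tau)e^{\Gamma_1I_0\Delta z}-i_\tau$ with $i_\tau=\tau_1/\Gamma_1$, taking a single $\omega_1$ and $\omega_0/\omega_1\approx1$ (hypothetical helper names, not the actual \textsc{deplete}{} implementation):

```python
import math

def B_step(I0, i1, kappa0, kappa1, Sigma1, dz):
    """Bremsstrahlung subsystem over a leftward step dz: I0 -> I0 exp(kappa0 dz);
    i1 relaxes toward the optically thick level Sigma1/kappa1 as exp(-kappa1 dz)."""
    I0n = I0 * math.exp(kappa0 * dz)
    if kappa1 != 0.0:
        i1_ot = Sigma1 / kappa1
        i1n = (i1 - i1_ot) * math.exp(-kappa1 * dz) + i1_ot
    else:
        i1n = i1 + Sigma1 * dz
    return I0n, i1n

def C_step(I0, i1, Gamma1, tau1, dz):
    """Constant-coefficient coupling-Thomson subsystem: gain exponent
    Gamma1*I0*dz, seed i_tau = tau1/Gamma1; I0 is then updated from the
    conservation law d_z(I0 - i1) = 0 (omega_0/omega_1 ~ 1 here)."""
    i_tau = tau1 / Gamma1
    i1n = (i1 + i_tau) * math.exp(Gamma1 * I0 * dz) - i_tau
    return I0 + (i1n - i1), i1n

def strang_step(I0, i1, kappa0, kappa1, Sigma1, Gamma1, tau1, dz):
    """One B_{1/2} C_1 B_{1/2} cycle, second-order accurate in dz."""
    I0, i1 = B_step(I0, i1, kappa0, kappa1, Sigma1, 0.5 * dz)
    I0, i1 = C_step(I0, i1, Gamma1, tau1, dz)
    return B_step(I0, i1, kappa0, kappa1, Sigma1, 0.5 * dz)
```

With the bremsstrahlung terms switched off, the cycle conserves $I_0-i_1$ exactly, mirroring the conservation law used for $I_0$ in the $C$ step.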
We stress that the numerical accuracy of \textsc{deplete}{} is strongly influenced by the quality of the given plasma conditions. \subsection{The bremsstrahlung step $B$} $B$ must solve Eqs.~(\ref{eq:brem0}-\ref{eq:brem1}) with $\kappa_i$ and $\Sigma_1$ constant, to at least second-order accuracy. This linear system is readily solved analytically. Since there are two ``half-steps'' of $B$ in Eq.\ (\ref{eq:BCBstep}), we consider a generic step of size $\Delta z$ with initial conditions $\{I_0,i_1\}^1$, yielding new values $\{I_0,i_1\}^0$. $X^{1/2}=(X^0+X^1)/2$ denotes the zone-centered value of some quantity $X$. If $\kappa_1^{1/2}\neq0$, we find \begin{eqnarray} I_0^0 &=& I_0^1\exp[\kappa_0^{1/2}\Delta z], \label{eq:bremsol0} \\ i_1^0 &=& (i_1^1-i_1^{\mathrm{OT},1/2})\exp[-\kappa_1^{1/2}\Delta z] + i_1^{\mathrm{OT},1/2} \label{eq:bremsol1}. \end{eqnarray} Eq.~(\ref{eq:bremsol1}) applies separately at each $\omega_1$. For the special case $\kappa_1^{1/2}=0$, Eq.~(\ref{eq:bremsol1}) is replaced with \begin{equation} i_1^0 = i_1^1 + \Sigma_1^{1/2}\Delta z \qquad (\kappa_1^{1/2}=0). \label{eq:bremsol1a} \end{equation} The rightmost $B$ in Eq.~(\ref{eq:BCBstep}) advances the system from $z^n$ to $z^{n-1/2}$. Accordingly, for this step, the needed coefficients in Eqs.~(\ref{eq:bremsol0}-\ref{eq:bremsol1a}) are interpolated one quarter of the way from $z^n$ to $z^{n-1}$: $X^{1/2} = [(1/4)X^{n-1}+(3/4)X^n]$. Similarly, the leftmost $B$ in Eq.~(\ref{eq:BCBstep}) advances the system from $z^{n-1/2}$ to $z^{n-1}$ and uses $X^{1/2} = [(3/4)X^{n-1}+(1/4)X^n]$. In both cases $\Delta z=(z^n-z^{n-1})/2$. \subsection{The coupling-Thomson step $C$} We now turn to the $C$ operator. $I_0$ is evolved via a conservation law of the $C$ system, Eqs.\ (\ref{eq:coup0}-\ref{eq:coup1}): \begin{equation} d_z\left[ I_0 - \int d\omega_1\ {\omega_0\over\omega_1}i_1 \right]=0.
\end{equation} On the discrete $z$ grid, this gives \begin{equation} I_0^{n-1} = I_0^n + \int d\omega_1\ {\omega_0\over\omega_1}(i_1^{n-1}-i_1^n). \end{equation} Before doing this, we must advance $i_1$ using Eq.~(\ref{eq:coup1}) with constant $I_0=I_0^n$ (that is, we neglect pump depletion within a zone). This gives rise to a numerical challenge. Namely, the coefficients $\tau_1$ and $\Gamma_1$ are both proportional to $|\epsilon|^{-2}$, and contain a narrow resonance where $\re\,\epsilon=0$ if $\im\,\epsilon$ is small (that is, where the beating of the light waves drives a natural plasma wave). Integrating through these sharp peaks with a standard ODE method like Runge-Kutta performs very poorly unless the resonance is well-resolved by the $z$ grid (which it usually is not). To alleviate this problem, the key observation is that $\epsilon$ itself varies slowly in space, even though $|\epsilon|^{-2}$ varies rapidly near resonance. We can therefore represent $\epsilon$ as linearly varying with $z$ across a cell, and analytically solve the resulting system. We merely quote the result here, and refer the reader to Appendix B for the derivation and definition of the relevant quantities: \begin{equation} \label{eq:i1CTsol} i_1^{n-1} = (i_1^n+i_\tau)e^{B_\Gamma \Delta w_n}-i_\tau. \end{equation} \section{Benchmark on linear profiles} \label{s:bench} This section compares the results of \textsc{deplete}{} with those of \textsc{newlip}{} and \ftd{} on two contrived profiles with weak linear gradients, one for SRS and another for SBS. \textsc{deplete}{} and \ftd{} embody quite different physical models, each with their own approximations and limitations. One can view their favorable comparison here as a ``cross-validation'' of these models in a regime where they should agree. To compare with the \textsc{newlip}{} linear gain $G_l$ (see Appendix A), we need a noise level against which to compare the \textsc{deplete}{} scattered spectrum at the laser entrance, $i_{1L}$. 
For this noise level we choose $i_1^{br}$ at $z=0$, given by solving Eq.~(\ref{eq:i1gov}) with just the bremsstrahlung terms ($I_0\rightarrow0$): \begin{equation} \partial_zi_1^{br} = \kappa_1i_1^{br}-\Sigma_1. \end{equation} This is exactly Eq.~(\ref{eq:brem1}). We then introduce the \textsc{deplete}{} gain $G_d$: \begin{equation} G_d \equiv \ln {i_{1L} \over i_{1L}^{br}} = {\mathrm{``scattering''} \over \mathrm{``noise''}}, \end{equation} where $i_{1L}$ is the solution to the full \textsc{deplete}{} equations. $G_l$ and $G_d$ are exactly equal under the following conditions: there is no pump depletion, no TS ($\tau_1=0$), no absorption of scattered light ($\kappa_1=0$), and no volume bremsstrahlung noise ($\Sigma_1=0$); the only seeding in \textsc{deplete}{} is then via the boundary values $i_{1R}(\omega_1)$. \subsection{SRS benchmark} \label{s:benchsrs} The spatial profiles of our SRS benchmark plasma conditions are shown in Fig.\ \ref{f:srsprof}. We use a profile length $L_z=510\lambda_0$, pump vacuum wavelength $\lambda_0=(1054/3)$ nm, fully-ionized H ions with $T_i=$ 1 keV, and no plasma flow ($\vec u=0$). In both the \textsc{deplete}{} and \ftd{} runs of this section, SBS was not included. Fig.\ \ref{f:srsref} plots the resulting reflectivities for several pump strengths. Although these are all above the homogeneous absolute instability threshold of $I_0^{ab}\approx 0.21$ PW/cm$^2$, the time-dependent \ftd{} runs rapidly approach a steady state and show no signs of a temporally-growing mode \footnote{The homogeneous absolute instability threshold $I_0^{ab}$ is such that the undamped amplitude growth rate $\gamma_0(I_0^{ab})$ satisfies $\gamma_0=(1/4)|v_{g1}v_{g2}|^{1/2}(\kappa_1+\kappa_2)$ where $\kappa_2\equiv2\nu_2/v_{g2}$ is the plasma-wave spatial energy damping rate.}. The weak gradients, or incoherent noise source, may lead to stabilization. After increasing exponentially with $I_{0L}$ for weak pumps, the reflectivity rolls over. 
This saturation due to pump depletion is generic for three-wave interactions in the strong damping limit, as demonstrated analytically by Tang \cite{tang-sbs-jap-1966}. We compare the gains $G_l$ and $G_d$ from \textsc{newlip}{} and \textsc{deplete}{}, for several pump strengths, in Fig.\ \ref{f:srsG}. The general shapes of the gains are quite close, although their absolute levels differ. For the weakest pump strength, where pump depletion plays little role (as can be inferred from the reflectivity plot in Fig.\ \ref{f:srsref}), the peak $G_d$ is slightly higher than $G_l$. This is due to the volume sources in \textsc{deplete}{}, namely TS and bremsstrahlung noise. To illustrate this, we plot $G_d$ found with no Thomson scattering ($\tau_1=0$) as the black dotted curve. It lies between the two other curves near the peak, and overlaps $G_l$ away from the peak. The curves for the two larger values of $I_{0L}$ in Fig.\ \ref{f:srsG} show $G_d$ to be progressively farther below $G_l$ at peak. This results from pump depletion, which the reflectivity plot clearly shows is significant for $I_{0L} \gtrsim 0.8$ PW/cm$^2$. The bremsstrahlung noise level $i_1^{br}$ varies between (2.4-4.1)$\times10^{-9}$ W/cm$^2$/(rad/sec) over $\lambda_1=$ 650 to 550 nm. \begin{figure} \includegraphics[width=2.75in]{srs_prof.pdf} \caption{Plasma conditions for SRS benchmark.} \label{f:srsprof} \end{figure} \begin{figure} \includegraphics[width=2.3in]{srs_Rcomp.pdf} \caption{(Color online.){} SRS reflectivity vs.\ pump intensity for the SRS benchmark profile of Fig.\ \ref{f:srsprof}. The black circles and red squares are for \ftd{} and \textsc{deplete}{}, respectively.} \label{f:srsref} \end{figure} \begin{figure} \includegraphics[width=3.25in]{srs_Gall.pdf} \caption{(Color online.){} \textsc{deplete}{} gain $G_d$ (black solid), \textsc{newlip}{} gain $G_l$ (red dashed), and $G_d$ with no TS for $I_{0L}=0.4$ PW/cm$^2$ ($\tau_1=0$, black dots), for SRS benchmark. 
TS and volume bremsstrahlung noise enhance $G_d$ over $G_l$ for the smallest $I_{0L}$, while pump depletion suppresses $G_d$ for the larger two.} \label{f:srsG} \end{figure} We also compared \textsc{deplete}{} to the massively-parallel, paraxial laser propagation code \ftd{} \cite{berger-f3d-pop-1998}. This code solves for the slowly-varying envelopes of the pump laser, nearly-backscattered SRS and SBS light waves, and the daughter plasma waves, in space and time. A carrier $\omega^{en}$ is chosen for each mode (except for the ion acoustic wave), and the corresponding rapid time variations are averaged over. A local eikonal $k^{en}$, given by the appropriate $\omega^{en}$ and dispersion relation with local plasma conditions, contains the rapid space variation. Kinetic quantities, such as Landau damping rates and Thomson cross-sections, are variously found from (linear) kinetic formulas or fluid approximations. There is no bremsstrahlung source, but the pump and scattered light waves all experience inverse-bremsstrahlung damping. The plasma waves undergo Landau damping, and the advection term $v_{g2}\partial_xn_2$ is retained (i.e., they are not treated in the strong damping limit). The noise source in \ftd{} is plasma-wave fluctuations chosen to produce the correct TS level, and uniformly distributed over a square in $k_\perp$ space (corresponding to the transverse $x$ and $y$ directions) extending to half the Nyquist $k$ in both $k_x$ and $k_y$. To replicate the 1D model of \textsc{deplete}{}, we performed ``plane-wave'' simulations in \ftd{}. The incident laser at the $z=0$ entrance plane is uniform in the $x$ and $y$ directions (i.e., there is no structure like speckles), both of which are periodic with size $L_x=L_y=128\lambda_0$ and grid spacing $dx=dy=1.33\lambda_0$. The $z$ spacing is $dz=2\lambda_0$. As described above, the TS noise fills a square in $k_\perp$ space extending to $k_x,k_y=\pm k_{1n}$, with $k_{1n}=(3/16)k_{0v}$ and $k_{0v}\equiv\omega_0/c$. 
We enveloped the SRS backscattered light around $\omega_1^{en}=0.592\omega_0$ ($\lambda_1$=593.3 nm), which has the highest linear gain. Over the slight variation of our profile, the average $k_1^{en}=0.461k_{0v}$. \textsc{deplete}{} requires a solid angle $\Omega_c$, which we express in terms of an F-number $F$, for TS and bremsstrahlung emission (we excluded the latter for \ftd{} comparisons). Taking $k_1^{en}$ and $k_{1n}$ to determine the focal length and spot radius, one finds $F=k_1^{en}/2k_{1n}=1.23$. The scattered light does not uniformly fill the noise square in $k_\perp$ space, but rather develops into a somewhat hollow ``ring'' with a radius $\approx0.12k_{0v}$ (departing more from a square for stronger pumps); there is some ambiguity in the appropriate $F$ to use. We choose $F=1$, which leads to very close reflectivities for the weakest-pump case shown in Fig.\ \ref{f:srsref}, and is near the noise-square estimate $F=1.23$. Sidescatter at these angles may stress the accuracy of \ftd{}'s paraxial approximation. Figure \ref{f:srsref} shows the \textsc{deplete}{} and \ftd{} SRS reflectivities for the benchmark profile. The \ftd{} values are taken at $t=$39.4 ps, after which time all reflectivities remain roughly constant (the laser ramped from zero to full strength over 10 ps). The agreement is quite good, especially in the linear (weak pump) and the strongly-depleted (strong pump) regimes. This increases confidence in the validity of the different approximations made in both codes. \textsc{deplete}{} took about 2 seconds of wall time on one Itanium CPU, while \ftd{} needed 5300 seconds on 16 of these CPUs to advance 10 ps. \subsection{SBS benchmark} We performed an SBS benchmark (with SRS neglected) using the profiles in Fig.\ \ref{f:sbsprof}. The ions were fully-ionized He ($Z=2$, $A=4$) with $T_i=T_e/5$. The parallel flow velocity $u$ is shown normalized to the local acoustic speed $c_a$, where $c_a^2\equiv(ZT_e+3T_i)/(Am_p)$.
The pump wavelength and profile length match the SRS benchmark. The SBS reflectivity vs.\ pump strength is plotted in Fig.\ \ref{f:sbsref}, which shows pump depletion for $I_{0L}\gtrsim 1.25$ PW/cm$^2$. We estimate the absolute threshold $I_0^{ab}=2.6$ PW/cm$^2$ and stay below this. We used $F=1.7$ since this gives good agreement with \ftd{} ``plane-wave'' simulations for low $I_0$. However, for larger values of $I_0$ a ring in $k_\perp$ space develops, similar to the SRS runs, and is accompanied by a large increase in reflectivity. Figure \ref{f:sbsG} compares the \textsc{deplete}{} and \textsc{newlip}{} gains, $G_d$ and $G_l$. For the smaller two pumps we see the enhancement of $G_d$ over $G_l$ due to TS (even though pump depletion has set in for the second case $I_{0L}=$ 1.4 PW/cm$^2$), as discussed in Sec.\ \ref{s:thom}. The dotted black curve for $I_{0L}=$ 0.6 PW/cm$^2$ is $G_d$ computed with no TS, and shows the modest increase in $G_d$ stemming from bremsstrahlung volume (as opposed to boundary) noise. The elevated plateau of $G_d$ to the left of the peak is also due to TS. $I_{0L}=$2.5 PW/cm$^2$ gives $G_d<G_l$ due to strong pump depletion. In all cases the wavelength and width of the main peak of the two spectra are similar. $i_1^{br}$, the bremsstrahlung solution, varies slightly from (4.17-4.25)$\times10^{-9}$ W/cm$^2$/(rad/sec) over $\lambda_1-\lambda_0=$ 20 to -3 \AA. \begin{figure} \includegraphics[width=3.1in]{sbs_prof_all.pdf} \caption{SBS benchmark profile.} \label{f:sbsprof} \end{figure} \begin{figure} \includegraphics[width=2.5in]{sbs_R.pdf} \caption{SBS reflectivity for SBS benchmark profile. 
The squares are \textsc{deplete}{} results, and the dashed line is an extension of the low-$I_{0L}$ results.} \label{f:sbsref} \end{figure} \begin{figure} \includegraphics[width=3.23in]{sbs_Gall.pdf} \caption{(Color online.){} SBS \textsc{deplete}{} gain $G_d$ (black solid), \textsc{newlip}{} gain $G_l$ (blue dashed), and $G_d$ without TS for $I_{0L}=$0.6 PW/cm$^2$ ($\tau_1=0$, black dotted), for SBS benchmark profile.} \label{f:sbsG} \end{figure} \section{The relation of Thomson scattering to linear gain} \label{s:thom} As seen in our benchmark runs, TS leads to an enhancement of the \textsc{deplete}{} gain compared to the \textsc{newlip}{} gain (for negligible pump depletion). This is readily seen via the scattered-wave equation with just coupling and TS, Eq.\ (\ref{eq:coup1}): \begin{equation} \partial_zi_1 = - I_0(\tau_1 + \Gamma_1i_1). \end{equation} We use Eq.\ (\ref{eq:itau}) to obtain \begin{equation} \partial_zi_1 = - \gamma(i_\tau + i_1). \end{equation} $\gamma \equiv I_0\Gamma_1$ is the spatial gain rate. Typically, $\gamma$ has a narrow peak in $z$ at the resonance point, while $i_\tau$ varies slowly. For simplicity, we hold $i_\tau$ constant at the resonance point, and solve for $i_1$ across the region $z=0$ to $L_z$ which includes the resonance. In our usual notation, \begin{equation} i_{1L} = (i_{1R}+i_\tau)e^{G_l}-i_\tau. \end{equation} $G_l\equiv \int_0^{L_z} dz\, \gamma$ is the \textsc{newlip}{} linear gain. For $G_l\ll 1$, $i_{1L}=i_{1R}(1+G_l)+i_\tau G_l$, and emission due to the boundary source dominates over TS. In the opposite limit, \begin{equation} i_{1L} = (i_{1R}+i_\tau)e^{G_l}, \qquad e^{G_l}\gg1. \end{equation} TS therefore gives rise to an effective boundary source $i_\tau$ (for a narrow resonance). In this sense, it does not significantly alter the shape of the gain spectrum ($i_\tau$ varies slowly with $\omega_1$). 
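This closed form is easy to verify numerically. The sketch below (our helper names; $i_\tau$ frozen at its resonance value, as assumed above) marches $\partial_zi_1=-\gamma(i_\tau+i_1)$ from the right boundary with the gain rate frozen at each cell midpoint, and reproduces $(i_{1R}+i_\tau)e^{G_l}-i_\tau$ for a narrow Gaussian $\gamma(z)$:

```python
import math

def gain_G(gamma, L, n=4000):
    """Midpoint quadrature of G = int_0^L gamma(z) dz."""
    dz = L / n
    return sum(gamma((k + 0.5) * dz) for k in range(n)) * dz

def i1_entrance(i1R, i_tau, gamma, L, n=4000):
    """March d(i1)/dz = -gamma(z)*(i_tau + i1) from z = L down to z = 0,
    applying the exact per-step exponential with gamma frozen at the
    midpoint of each cell."""
    dz = L / n
    i1 = i1R
    for k in reversed(range(n)):
        g = gamma((k + 0.5) * dz) * dz
        i1 = (i1 + i_tau) * math.exp(g) - i_tau
    return i1
```

For $e^{G_l}\gg1$ the boundary seed and the TS term enter on the same footing, which is the sense in which TS acts as an effective boundary source.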
However, it \textit{does} lead to a difference in the absolute magnitude of the scattered spectrum, as embodied in an ``absolutely-calibrated'' gain like $G_d$. As an illustration, let us take $i_{1R}=\ensuremath{i_1^\mathrm{OT}}$, the optically-thick bremsstrahlung result of Eq.\ (\ref{eq:ot}), for simplicity evaluated at the resonance point in the Jeans limit $\hbar\omega_1\ll T_e$. Moreover, we set $T_i=T_e$ so that $i_\tau$ assumes the simple form of Eq.\ (\ref{eq:itauTeTi}). The effective seed is then \begin{equation} \label{eq:i1rit} i_{1R}+i_\tau \rightarrow \ensuremath{i_1^\mathrm{OT}}\left( 1 + {\Omega_1\over\Omega_1^v}\psi f\eta_0\eta_1{\omega_0\over\omega_1}{\omega_0\over\omega_2'} \right). \end{equation} The second term on the right ($=i_\tau/\ensuremath{i_1^\mathrm{OT}}$) is typically $\lesssim10$ for SRS: for our SRS benchmark, $i_\tau/\ensuremath{i_1^\mathrm{OT}}\approx3$. But, it can be quite large for SBS since $\omega_0\gg\omega_2'$ (for our SBS benchmark, $i_\tau/\ensuremath{i_1^\mathrm{OT}}\approx400$). A similar result is found in Ref.\ \cite{berger-srsnoise-pofb-1989}. The authors explain this on the thermodynamic ground that bremsstrahlung and \v{C}erenkov emission (which produces TS) generate equal light- and plasma-wave action, so the light-wave energy dominates by the frequency ratio. This manifests itself in the $\omega_0/\omega_2'$ factor in Eq.\ (\ref{eq:i1rit}), which is much larger for SBS. \section{Simulation of SBS experiments} \label{s:omsbs} Experiments have been conducted recently at the OMEGA laser to study LPI in conditions similar to those anticipated at NIF \cite{froula-omega-pop-2007}. These shots use a gas-filled hohlraum, and a set of ``heater'' beams to pre-form the plasma environment. An ``interaction'' beam is propagated down the hohlraum axis after being focused through a continuous phase plate (CPP) \cite{dixit-cpp-optlet-1996} with an f/6.7 lens to a vacuum best focus of 150 $\mu$m. 
The plasma conditions along the interaction beam path have been measured using TS \cite{froula-thomson-pop-2006}, validating 2-dimensional \textsc{hydra}{} \cite{marinak_hydra_pop_2001} hydrodynamic simulations that show, 700 ps after the rise of the heater beams, a uniform 1.5-mm plasma with an electron temperature of $\approx$2.7 keV \cite{meezan-lpi-pop-2007}. Figure \ref{f:omch}(a) displays the instantaneous SBS reflectivity increasing exponentially with the interaction beam intensity 700 ps after the rise of the heater beams. These experiments employed a 1 atmosphere gas-fill with 30\% CH$_4$ and 70\% C$_3$H$_8$ to produce an electron density along the interaction beam path of 0.06$n_{c0}$. Three-dimensional \ftd{} simulations agree well with the experiments \cite{divol-aps07-pop-2008}. Unlike the ``plane-wave'' simulations discussed in Sec.\ \ref{s:benchsrs}, these simulations include the full speckle physics. The \textsc{deplete}{} results (blue solid curve) fall well below the experimental data in the regime where pump depletion does not play a significant role ($I_0 \lesssim 2$ PW/cm$^2$). This indicates that speckles are enhancing the SBS. One way to approximate the speckle enhancement is to consider how much the coupling increases for the completely phase-conjugated mode \cite{zeldovich-phconj}. This mode has a transverse intensity pattern perfectly correlated with that of the pump, over several axial ranks of speckles, and therefore enhances the coupling coefficient $\Gamma_1$ \cite{divol-phaseconj-dpp-2005}. For an RPP-smoothed beam with intensity distribution $\sim e^{-I/I_c}$, this effectively doubles $\Gamma_1$. This should provide an upper bound on the reflectivity so long as the gain per speckle is $\lesssim 1$. If this is not the case, the gain in a speckled pump suffers a mathematical divergence (mitigated by pump depletion) as described in Ref.\ \cite{rose-div-prl-1994}. Our phase-conjugate considerations would then not apply.
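The factor of two can be seen from the speckle statistics alone, if one takes the phase-conjugate enhancement of $\Gamma_1$ to scale as the normalized second moment $\langle I^2\rangle/\langle I\rangle^2$ of the intensity distribution (our reading of the argument above; an illustration, not the derivation of Ref.\ \cite{divol-phaseconj-dpp-2005}). A quick quadrature over $P(I)\propto e^{-I/I_c}$ gives exactly 2:

```python
import math

def moment_ratio(Ic=1.0, N=200000, top=40.0):
    """<I^2>/<I>^2 for speckle statistics P(I) ~ exp(-I/Ic), by midpoint
    quadrature out to top*Ic (far into the exponential tail)."""
    dI = top * Ic / N
    norm = m1 = m2 = 0.0
    for k in range(N):
        I = (k + 0.5) * dI
        w = math.exp(-I / Ic) * dI
        norm += w
        m1 += I * w
        m2 += I * I * w
    return (m2 / norm) / (m1 / norm) ** 2
```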
The blue dashed curve in Fig.\ \ref{f:omch} shows the \textsc{deplete}{} results with twice the nominal coupling. The $2\times\Gamma_1$ curve is always above the experimental reflectivities. The threshold intensity for which SBS equals 5\% is 1.8 PW/cm$^2$ and 0.9 PW/cm$^2$ for \textsc{deplete}{} with the nominal and twice-nominal coupling, respectively, while the experimental threshold is $\approx$1.5 PW/cm$^2$. Comparison of \textsc{deplete}{} and \ftd{} is displayed in Fig.\ \ref{f:omch}(b). These calculations were performed using plasma conditions from a \textsc{hydra}{} simulation, for a configuration similar to that of Fig.\ \ref{f:omch}(a), but with a higher heater-beam energy. The resulting conditions are similar, except the electron temperature is higher (about 3.3 keV). The \textsc{deplete}{} reflectivity with the nominal coupling (solid blue curve) lies below the \ftd{} results for the two intermediate values of $I_0$. This demonstrates that speckle effects enhance the \ftd{} reflectivity for moderate $I_0$. The \textsc{deplete}{} results for $2\times\Gamma_1$ (dashed blue curve) are always above the \ftd{} results. Preliminary analyses with \textsc{deplete}{} and \ftd{} of OMEGA experiments designed to study ion Landau damping in SBS \cite{neumayer-sbs-prl-2008} also show a significant enhancement due to speckles. \begin{figure} \includegraphics[width=2.4in]{om_ch_exper.pdf} \\ \includegraphics[width=2.4in]{om_ch_f3d.pdf} \caption{(Color online.){} (a) SBS reflectivity for OMEGA experiments with CH gas fill and $T_e\approx2.7$ keV (described in text). Black circles are measured values, the blue solid curve shows \textsc{deplete}{} calculations with the nominal coupling $\Gamma_1$, and the blue dashed curve shows \textsc{deplete}{} calculations with $2\times\Gamma_1$. (b) \textsc{deplete}{} and \ftd{} SBS reflectivities for a similar configuration but $T_e\approx3.3$ keV.
Black crosses are \ftd{} simulations, and the blue curves are the \textsc{deplete}{} results as in (a).} \label{f:omch} \end{figure} \section{Analysis of NIF ignition design} \label{s:nif} In this section, we exercise \textsc{deplete}{} on an actual NIF indirect-drive ignition target design. The target was designed using the hydrodynamic code \textsc{lasnex}{} \cite{zimmerman-lasnex-cppcf-1975}. For more details about the design see Ref.\ \cite{callahan-ifsa07}; LPI analysis for this and similar ignition targets, including massively-parallel, 3D \ftd{} simulations, can be found in Ref.\ \cite{hinkel-aps07-pop-2008}. The design utilizes all 192 NIF beams (at 351 nm ``blue'' light), which deliver 1.3 MJ of laser energy. We analyze LPI along the 30$^\circ$ cone of beams (one of the two ``inner'' cones). The pulse shape for one quad (a bundle of four beams), expressed as nominal intensity at best focus, is shown in Fig.\ \ref{f:nifIfoc}, and reaches a maximum of 0.33 PW/cm$^2$. The speckle pattern for a quad approximately corresponds to an F-number of $F=8$, which we use for \textsc{deplete}{}'s noise sources (but each beam individually has $F=20$ optics). The focal spot is elliptical with semi-axis lengths of 693 and 968 $\mathrm{\mu m}$. The peak temperature of the radiation drive is 285 eV. The materials are as follows: the capsule ablator is Be, a plastic (CH) liner surrounds the laser entrance hole, the hohlraum wall is Au-U with a thin outer layer of 80\% Au-20\% B (atomic ratio), and the initial fill gas is 80\% H-20\% He. The lower-Z components are included in the last two mixtures to reduce SBS by increasing the ion Landau damping of the acoustic wave. \begin{figure} \includegraphics[width=2.25in]{nif_ifoc.pdf} \caption{Nominal intensity at best focus for 285 eV NIF ignition design (``NIF example''), found by dividing the laser power per quad by the focal spot size.
The peak intensity corresponds to 6.9 TW/quad.} \label{f:nifIfoc} \end{figure} \begin{figure} \includegraphics[width=3in]{nif_mat.pdf} \caption{(Color online.){} Materials and laser beam cones for NIF example.} \label{f:nifmat} \end{figure} We performed \textsc{deplete}{} calculations, with both SRS and SBS, at several times and over 381 ray paths for each time. One must take an appropriate ``average'' over the rays to characterize the LPI on a cone. Regarding \textsc{newlip}{} gains, this has led to several approaches. These include averaging the gain, finding the maximum gain, or averaging $e^{G_l}$. This last method stems from assuming there is no pump depletion and noise sources are independent of scattered frequency; in this limit, the reflectivity should be roughly proportional to $e^{G_l}$. However, this averaging, and a fortiori taking the maximum, can be dominated by gains that are larger than physically allowed by pump depletion or other nonlinearity. One can attempt to include pump depletion via a Tang formula for $G_l$ at each $\omega_1$ \cite{tang-sbs-jap-1966}. \textsc{deplete}{} allows for more physical ray-averaging schemes. To the extent the transverse intensity pattern of a cone is uniform, each ray represents the same incident laser power. Averaging \textsc{deplete}'s ray reflectivities then measures the fraction of incident power that gets reflected. Pump depletion is of course included, which limits backscatter along high-gain rays in a physical way. The reflectivities and scattered spectra plotted here are simple averages over the rays. \begin{figure} \includegraphics[width=2.4in]{nif_refR.pdf} \\ \includegraphics[width=2.4in]{nif_refB.pdf} \caption{(Color online.){} \textsc{deplete}{} SRS and SBS ray-averaged reflectivities $I_{1L}$ for NIF example. 
Solid lines are the nominal case (re-absorption and $\Gamma_1$ unscaled), dashed lines are the nominal $\Gamma_1$ but no re-absorption of scattered light ($\kappa_1=0$), and dotted lines are $2\times\Gamma_1$ with re-absorption.} \label{f:nifref} \end{figure} \begin{figure} \includegraphics*[width=2.75in]{nif_specr.pdf} \caption{(Color online.){} SRS streaked spectrum $i_{1L}$ for NIF example, nominal case ($\kappa_1\neq0$, $1\times\Gamma_1$).} \label{f:nifspecr} \end{figure} \begin{figure} \includegraphics*[width=2.75in]{nif_specb.pdf} \caption{(Color online.){} SBS streaked spectrum for NIF example, nominal case ($\kappa_1\neq0$, $1\times\Gamma_1$). The white-yellow streak from 5-8 \AA{} occurs in the Be ablator, while the weaker feature from 12-15 \AA{} occurs in the gas fill.} \label{f:nifspecb} \end{figure} The reflectivities for several times near peak laser power, for the 30$^\circ$ cone, are shown in Fig.\ \ref{f:nifref}. The results for three different cases are presented. First, the solid lines give the reflectivities computed with the unmodified \textsc{deplete}{} equations. To quantify the role of re-absorption of scattered light in the target, we re-ran \textsc{deplete}{} with $\kappa_1=0$. This leads to the dashed lines. Finally, to bound the enhancement due to speckles, we plot the results when $\Gamma_1$ is doubled (and $\kappa_1\neq0$) as the dotted lines. \begin{figure} \includegraphics*[width=2.75in]{nif_isrs.pdf} \caption{(Color online.){} \textsc{deplete}{} SRS spectrum at time 13.75 ns for NIF example, smoothed over $\approx1$ nm. The black solid and red dashed lines are computed with ($\kappa_1\neq0$) and without ($\kappa_1=0$) re-absorption of scattered light, respectively.} \label{f:nifisrs} \end{figure} The spectra of escaping SRS and SBS light (averaged over rays) are shown in Fig.\ \ref{f:nifspecr}-\ref{f:nifspecb}. The SBS feature at a wavelength shift of 5-8 \AA{} comes from the Be ablator blowoff. 
A much weaker feature appears from 12-13 ns at 12-15 \AA{}, and occurs in the gas fill. The SRS spectrum is more irregular, showing two main features separated by $\approx$20 nm that move to higher $\lambda_1$ as time increases. In addition, there are narrow features at higher $\lambda_1$ that originate near the hohlraum wall; these would be reduced in a ray-averaged gain, since the exact $\lambda_1$ active for each ray depends sensitively on conditions near the wall and therefore varies from ray to ray. Re-absorption strongly suppresses these high-$\lambda_1$ spikes, as is seen in the SRS spectra with and without re-absorption at $t=$ 13.75 ns in Fig.\ \ref{f:nifisrs}. Collisional plasma-wave damping, currently not in \textsc{deplete}{}, may reduce the high-$\lambda_1$ scattering (the Landau damping of the low-$k_2\lambda_{De}$ plasma waves is negligible). \begin{figure} \includegraphics[width=2.4in]{nif_trnI0.pdf} \\ \includegraphics[width=2.4in]{nif_trnI1.pdf} \caption{(Color online.){} (a) Laser transmission for NIF example at 12.5 ns (peak power): black solid curve is the nominal \textsc{deplete}{} solution with pump depletion, red dashed curve is with just inverse-bremsstrahlung absorption, and black dotted curve is the \textsc{deplete}{} solution with $2\times\Gamma_1$. (b) SBS (blue) and SRS (red) scattered intensities for the nominal \textsc{deplete}{} solution. Calculation of intensity at a given $n_e$ is described in text.} \label{f:niftrn} \end{figure} Besides backscatter, \textsc{deplete}{} also provides the pump intensity $I_0(z)$ along each ray. This indicates how much laser energy is transmitted to a given location, which is a crucial aspect of whether LPI degrades target performance. In cases where the backscattered light undergoes significant absorption as it propagates out of the target (as happens to SRS for the design analyzed here), the measured reflectivity can understate the level of LPI. The laser transmission can reveal this fact.
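A toy estimate (ours, not the \textsc{deplete}{} model) makes the point concrete: light backscattered at depth $z_s$ escapes the entrance attenuated by $\exp[-\int_0^{z_s}\kappa_1\,dz]$, so the measured reflectivity understates the internal scattering by that factor:

```python
import math

def escaping_fraction(kappa1, z_s, n=1000):
    """Fraction of light backscattered at z = z_s that escapes at z = 0,
    attenuated by exp(-integral of kappa1 from 0 to z_s), with the
    integral done by midpoint quadrature."""
    dz = z_s / n
    tau = sum(kappa1((k + 0.5) * dz) for k in range(n)) * dz
    return math.exp(-tau)
```

For example, a constant $\kappa_1$ of 1 mm$^{-1}$ over 2 mm gives $e^{-2}\approx0.14$, so the internal scattering would be roughly $7\times$ the escaping level.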
Figure \ref{f:niftrn}(a) presents $I_0$, averaged over all the rays, at a given $n_e$. This is a 1D presentation of how much energy reaches a given density, although in the full 3D geometry different rays reach the same $n_e$ at different locations. $I_0$ with just pump absorption, as well as the \textsc{deplete}{} solutions with pump depletion for the nominal case and $2\times\Gamma_1$, are shown. Pump depletion is barely discernible in the nominal case, but is significant in the $2\times\Gamma_1$ case. For instance, in the latter case $I_0$ at $n_e/n_{c0}=0.2$ is only 60\% of its absorption-only value. The wavelength-integrated SRS and SBS $I_1$ are shown in Fig.\ \ref{f:niftrn}(b), and the scattered spectra vs.\ $n_e$ are shown in Figs.\ \ref{f:nifiofsr}-\ref{f:nifiofsb}. SRS in particular develops at several different densities, corresponding to different wavelengths, as can be seen in Figs.\ \ref{f:nifisrs} and \ref{f:nifiofsr}. \begin{figure} \includegraphics*[width=2.75in]{nif_iofsr.pdf} \caption{(Color online.){} SRS spectral density $i_1$ vs.\ $n_e/n_{c0}$ and $\lambda_1$, in decibels, at 12.5 ns (peak power), for NIF example.} \label{f:nifiofsr} \end{figure} \begin{figure} \includegraphics*[width=2.75in]{nif_iofsb.pdf} \caption{(Color online.){} SBS spectral density $i_1$ vs.\ $n_e/n_{c0}$ and $\lambda_1-\lambda_0$, in decibels, at 12.5 ns (peak power), for NIF example.} \label{f:nifiofsb} \end{figure} \section{Conclusions and future prospects} \label{s:conc} We have derived a 1D, steady-state, kinetic model for Brillouin and Raman backscatter, that includes pump depletion, bremsstrahlung damping and fluctuations, and Thomson scattering. This model is implemented by the code \textsc{deplete}{}, which we have presented as well. This work extends linear gain calculations, by including more physics while retaining its low computational cost. 
In particular, \textsc{deplete}{} provides the scattered-light spectrum and intensity developing from physical noise, which can be compared against more sophisticated codes and experiments. The transmitted pump laser along the profile is also found, which is important for assessing an ICF target design, especially when re-absorption of scattered light reduces the escaping backscatter from its internal level. We presented benchmarks of \textsc{deplete}{} on contrived, linear profiles, as well as analysis of OMEGA experiments and a NIF ignition design. The benchmarks reveal the deficiencies of linear gain, namely the neglect of TS, pump depletion, and re-absorption. Comparisons with \ftd{} provide a cross-validation of the two codes in a regime where they should agree. The OMEGA SBS experimental data, as well as \ftd{} simulations of these shots, show much more reflectivity than \textsc{deplete}{} gives, for intensities where pump depletion is weak. This enhancement is due to speckle effects. We showed that an upper bound on this enhancement is given by doubling the \textsc{deplete}{} coupling coefficient $\Gamma_1$, which comes from considering the phase-conjugated mode in an RPP-smoothed beam. The ignition design analysis gives reasonably low backscatter levels for the nominal laser intensity and including re-absorption, with SRS dominating SBS. However, if re-absorption is neglected, or especially if $\Gamma_1$ is doubled, the backscatter appears more worrisome. The laser transmission supports these conclusions. Ray-based gain calculations have been used for some time to model LPI experiments, and \textsc{deplete}{} can provide more detailed comparisons. An early application of gain to hohlraum targets is Ref.\ \cite{glenzer-smoothing-pop-2001}, where hohlraums filled with CH gas were driven by laser beams with and without PS and SSD. Without SSD, reasonable agreement was found between measurements and the time-dependent SBS gain spectrum.
However, there was a large difference in peak SRS wavelength between measurements and the gain spectrum, which may be due to laser filamentation changing the location of peak SRS growth. Several future directions exist for \textsc{deplete}. One is to include an ``independent speckle'' model for gain enhancement, where one solves the \textsc{deplete}{} equations over a speckle length for a distribution of pump intensities and then re-distributes the power. This would not describe correlations among axial ranks of speckles, caused e.g.\ by phase conjugation. \textsc{deplete}{} also enables some new diagnostics and applications. The pump and scattered intensities found by \textsc{deplete}{} can be used to compute the local material heating rate due to absorption. This could be incorporated into a hydrodynamic code, thereby coupling LPI to target evolution in a self-consistent, if simplified, way. In addition, the plasma-wave amplitudes found by \textsc{deplete}{} can be compared against thresholds for various nonlinearities to assess their relevance, and may allow estimation of hot electron production by SRS. Despite its promise, there are limits inherent to any 1D or ray-based approach, stemming from 3D wave optics (e.g. diffraction, speckles, filamentation, and beam bending). A 3D paraxial code called \slip{} \cite{froula-lengthlim-prl-2008}, which like \textsc{deplete}{} operates in steady state and uses kinetic coefficients, is being developed. This model is in some sense intermediate between \textsc{deplete}{} and \ftd. 1D codes like \textsc{deplete}{} still have a valuable role. They can analyze hundreds of rays, using hundreds of scattered wavelengths, in $\sim$ minutes, thus allowing designs to be rapidly analyzed and compared. The resulting time-dependent spectra allow for contact with experimental diagnostics, and are frequently needed, for example, to choose the carrier $k$ and $\omega$ for \ftd. 
Laser-plasma interactions have proven to be a very challenging area of plasma physics, owing to the variety of relevant physics and extreme range of scales involved. This has led to an equally extreme range of modeling tools, from 1D gain estimates to 3D kinetic simulations. By fully exploiting these tools, each with their uses and limitations, a more complete picture is emerging. \begin{acknowledgments} We gratefully recognize A.\ B.\ Langdon, R.\ L.\ Berger, C.\ H.\ Still, and L.\ Divol for helpful discussions and support. This work was supported by US Dept.\ of Energy Contract DE-AC52-07NA27344. \end{acknowledgments}
\section*{APPENDIX} \setcounter{section}{0} \setcounter{subsection}{0} \def\thesection{\Alph{section}} \section{Comparison to Alternatives} \label{appendix:alternate} \subsection{Exact Similarity Search Algorithms} \label{appendix:exact} In this subsection, we investigate the performance and accuracy tradeoff between using MinHash LSH and exact algorithms for similarity search. We focus the comparison on set similarity joins, a line of exact join algorithms that identifies all pairs of sets above a similarity threshold from two collections of sets~\cite{setsimilarity}. State-of-the-art set similarity joins avoid exhaustively computing all pairs of set similarities via a filter-verification approach, such that only ``promising'' candidates that survive the filtering step are verified for the final join. We report single-core query times of our MinHash LSH implementation and four state-of-the-art algorithms for set similarity joins: PPJoin~\cite{ppjoin}, GroupJoin~\cite{groupjoin}, AllPairs~\cite{allpairs} and AdaptJoin~\cite{adaptjoin}. For the set similarity joins, we use an open-source implementation (C++) from a recent benchmark paper, which is reported to be faster than the original implementations on almost all data points tested~\cite{setsimilarity}. We use a set of fingerprints generated from 20 hours of continuous time series data, which includes 74,795 input fingerprints with dimension 2048 and 10\% non-zero entries. For set similarity joins, we transform each binary fingerprint into a set of integer tokens of the non-zero entries, with the tokens chosen such that larger integer tokens are more frequent than smaller ones. We found that with a Jaccard similarity threshold of 0.5, the MinHash LSH incurs a 6.6\% false negative rate while enabling 63$\times$ to 200$\times$ speedups compared to set similarity join algorithms (Table~\ref{tab:setsim}). 
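To make the comparison concrete, the following minimal Python sketch (our own illustration; the class, function, and parameter names are assumptions rather than the benchmarked implementation, and the frequency-based token ordering used by the join algorithms is omitted) shows the fingerprint-to-token-set transformation and a bare-bones MinHash LSH index:

```python
import random
from collections import defaultdict

def to_tokens(fingerprint):
    """Convert a binary fingerprint to the set of its non-zero indices."""
    return {i for i, bit in enumerate(fingerprint) if bit}

class MinHashLSH:
    """Minimal MinHash LSH: t tables, each keyed by k concatenated MinHash values."""
    def __init__(self, t=100, k=4, seed=0):
        rng = random.Random(seed)
        self.p = 2_147_483_647  # large prime for the universal hash family
        self.params = [[(rng.randrange(1, self.p), rng.randrange(self.p))
                        for _ in range(k)] for _ in range(t)]
        self.tables = [defaultdict(list) for _ in range(t)]

    def _signature(self, tokens, row):
        # One MinHash value per (a, b) pair; the tuple of k values is the bucket key.
        return tuple(min((a * x + b) % self.p for x in tokens) for a, b in row)

    def insert(self, key, tokens):
        for table, row in zip(self.tables, self.params):
            table[self._signature(tokens, row)].append(key)

    def query(self, tokens):
        # Candidates are all items sharing a bucket with the query in any table.
        out = set()
        for table, row in zip(self.tables, self.params):
            out.update(table.get(self._signature(tokens, row), ()))
        return out
```

Two sets with Jaccard similarity $s$ land in the same bucket of a given table with probability $s^k$, so the overall detection probability over $t$ tables is $1-(1-s^k)^t$; a production index additionally requires a minimum number of matching tables before reporting a pair.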
Among the four tested algorithms, AdaptJoin achieves the best query performance as a result of the small candidate set size enabled by its sophisticated filters. This is different from the benchmark paper's observation that expensive filters do not pay off and often lead to the slowest runtime~\cite{setsimilarity}. One important difference in our experiment is that the input fingerprints have a fixed number of non-zero entries; as a result, the corresponding input sets have equal length. Therefore, filtering and pruning techniques based on set length do not apply to our dataset. \subsection{Alternative LSH library} \begin{table} \small \centering \begin{tabular}{r r r r} \toprule \textbf{False Negative (\%)} & \textbf{Query time (ms)} &\textbf{\# Hash Tables} & \textbf{\# Probes} \\ \midrule 6.7 & 0.87 & 85 & 85 \\ 6.5 & 2.4 & 50 & 120 \\ 0.54 & 2.4 & 50 & 400 \\ 0.36 & 2.0 & 200 & 200 \\ \bottomrule \end{tabular} \caption{Average query time and false negative rate under different FALCONN parameter settings. } \label{tab:falconn} \end{table} In this subsection, we compare the query performance of our similarity search to an alternative and more advanced open-source LSH library. We were unable to find an existing high-performance implementation of LSH for Jaccard similarity, so we instead compare to FALCONN~\cite{falconnlib}, a popular library based on recent theoretical advances in the LSH family for cosine similarity~\cite{falconn}. We exclude hash table construction time, and compare single-core query time of FALCONN and our MinHash LSH. We use the cross-polytope LSH family and tune the FALCONN parameters such that the resulting false negative rate is similar to that of the MinHash LSH (6.6\%). With ``vanilla'' LSH, FALCONN achieves an average query time of 0.87ms (85 hash tables); with multi-probe LSH, FALCONN achieves an average query time of 2.4ms (50 hash tables and 120 probes). 
In comparison, our implementation has an average query time of 36 $\mu$s (4 hash functions, 100 hash tables), which is 24$\times$ and 65$\times$ faster than FALCONN with vanilla and multi-probe LSH. We report the runtime and false negative rate under additional FALCONN parameter settings in Table~\ref{tab:falconn}. Notably, in multi-probe LSH, adding additional probes reduces the false negative rate with very little runtime overhead. We consider using multi-probe LSH to further reduce the memory usage as a valuable area of future work. The performance difference reflects a mismatch between our sparse, binary input and FALCONN's target similarity metric, cosine distance. Our results corroborate previous findings that MinHash outperforms SimHash on binary, sparse input data~\cite{minhashsimhash}. \subsection{Supervised Methods} \label{appendix:model} In this subsection, we report results from using supervised models for earthquake detection on the Diablo Canyon dataset. \minihead{Models} We focus the evaluation on two supervised models: WEASEL~\cite{weasel} and ConvNetQuake~\cite{convquake}. The former is a time series classification model that leverages statistical tests to select discriminative bag-of-pattern features on Fourier transforms; it outperforms the state-of-the-art non-ensemble classifiers in accuracy on the UCR time series benchmark. The latter is a convolutional neural network model with 8 strided convolution layers followed by a fully connected layer; it has successfully detected uncataloged earthquakes in Central Oklahoma. \minihead{Data} As in the qualitative study in Section~\ref{sec:eq}, we focus on the area in the vicinity of the Diablo Canyon nuclear power plant in California. We use catalog earthquake events located in the region specified by Figure~\ref{fig:eqloc} as ground truth. 
We perform classification on the continuous ground motion data recorded at station PG.LMD, which has the largest number of high-quality recordings of catalog earthquake signals, and use additional data from station PG.DCD (the station that remained active for the longest) for augmentation. Both stations record at 100Hz on 3 channels, capturing ground motion along three directions: EHZ channel for vertical, EHN channel for North-South and EHE channel for East-West motions. We use the vertical channel for WEASEL, and all three channels for ConvNetQuake. \minihead{Preprocessing and Augmentation} We extract 15-second long windows from the input data streams, which include windows containing earthquake events (positive examples) as well as windows containing only seismic noise (negative examples). This window length is consistent with that used for fingerprinting. We adopt the recommended data preprocessing and augmentation procedures for the two models. For WEASEL, we z-normalize each 15-second window of time series by subtracting the mean and dividing by the standard deviation. For ConvNetQuake, we divide the input into monthly streams and preprocess each stream by subtracting the mean and dividing by the absolute peak amplitude; we generate additional earthquake training examples by perturbing existing ones with zero-mean Gaussian noise with a standard deviation of 1.2. For both models, we further augment the earthquake training set with examples of catalog events recorded at an additional station. In order to prevent the models from overfitting to the location of the earthquake event in the time window (e.g., a spike in the center of the window indicating an earthquake), we generate 6 samples for each catalog earthquake event with the location of the earthquake event shifted across the window. 
Specifically, we divide the 15-second time window into five equal-length regions, and generate one training example from each catalog event with the event located at a random position within each region; we generate an additional example with the earthquake event located right in the center of the window. We report prediction accuracy averaged on samples located in each of the five regions for each event. We further analyze the impact of this augmentation in the results section below. \minihead{Train/Test Split} We create earthquake (positive) examples from the arrival times from the Northern California Seismic Network (NCSN) catalog~\cite{NCEDC}. In total, the catalog yields 3,585 and 1,388 events for PG.LMD and PG.DCD, respectively, from 2007 to 2017. We select a random 10\% of the catalog events from PG.LMD as the test set, which includes 306 events from 8 months. We create a second test set containing 449 new earthquake events detected by our pipeline. Both test sets exhibit similar magnitude distributions, with the majority of events centered around magnitude 1. The training set includes the remaining catalog events at PG.LMD, as well as additional catalog events at PG.DCD. For negative examples, we randomly sample windows of seismic noise located between two catalog events at station PG.LMD. For training, we select 28,067 windows of noise for WEASEL, and 874,896 windows for ConvNetQuake; ConvNetQuake requires a much larger training set to prevent overfitting. For testing, we select 85,060 windows of noise from September 2016 for both models. Finally, we generate 15-second non-overlapping windows from one month of continuous data (December 2011) in the test set. We then select 100 random windows that the model classifies as earthquakes for false positive evaluation. \minihead{Results} We report the two models' best classification accuracy on test noise events (true negative rate), catalog events and FAST events in Table~\ref{tab:model}. 
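The preprocessing and window-shifting augmentation described above can be sketched as follows (a minimal illustration in Python; function names and defaults are our own assumptions, not the actual training code):

```python
import numpy as np

def z_normalize(window):
    """WEASEL-style preprocessing: subtract the mean, divide by the standard deviation."""
    std = window.std()
    return (window - window.mean()) / (std if std > 0 else 1.0)

def region_shifted_examples(trace, event_idx, fs=100, win_s=15, n_regions=5, rng=None):
    """Place the event at a random offset inside each of n_regions equal regions
    of the window, plus one centered copy (6 examples per catalog event in total)."""
    rng = rng or np.random.default_rng(0)
    win = win_s * fs
    region = win // n_regions
    offsets = [int(rng.integers(r * region, (r + 1) * region)) for r in range(n_regions)]
    offsets.append(win // 2)  # event centered in the window
    examples = []
    for off in offsets:
        start = event_idx - off
        if 0 <= start and start + win <= len(trace):
            examples.append(z_normalize(trace[start:start + win]))
    return examples
```

Averaging test accuracy over samples drawn from each region then measures whether a model is robust to the event's position within the window.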
The additional training data from PG.DCD boosts the classification accuracy for catalog and FAST events by up to 4.3\% and 3.2\%. If the model is only trained on samples with the earthquake event in the center of the window, the accuracy further degrades by over 6\% for WEASEL and over 20\% for ConvNetQuake, indicating that the models are not robust to translation. Overall, the 20\% gap in prediction accuracy between catalog events and FAST events suggests that models trained on the former do not generalize as well to the latter. Since the two test sets have similar magnitude distributions, the difference indicates that FAST events might be sufficiently different from the existing catalog events in the training set that they are not detected effectively. In addition, we report the false positive rate evaluated on a random sample of 100 windows predicted as earthquakes by each model. The ground truth is obtained via our domain collaborators' manual inspection. WEASEL and ConvNetQuake exhibit a false positive rate of 90\% with a 95\% confidence interval of 5.88\%. In comparison, our end-to-end pipeline has only 8\% false positives. \minihead{Discussion} The fact that an unsupervised method like our pipeline is able to find qualitatively different events than those in the existing catalog suggests that, for the earthquake detection problem, supervised and unsupervised methods are not mutually exclusive but complementary. In areas with rich historical data, supervised models have shown promising potential for earthquake classification~\cite{convquake}. However, in cases where there are not enough events in the area of interest for training, we can still obtain meaningful detections via domain-informed unsupervised methods. In addition, unsupervised methods can serve as a means for label generation to improve the performance of supervised methods. 
\section{Additional Evaluations} \label{appendix:eval} This section contains additional evaluation results for the factor analysis in Section 8.1, the microbenchmarks of pipeline parameters in Section 8.3 as well as a figure illustrating the key idea behind locality-sensitive hashing. In Table~\ref{tab:factor}, we report the runtime and relative improvement of each optimization in the factor analysis in Section 8.1 on 1 year of time series data at station LTZ in the New Zealand dataset. \begin{table}[t] \scriptsize \begin{tabular}{l l l l l} \toprule \textbf{Stages} & \textbf{Fingerprint} & \textbf{Hash Gen} & \textbf{Search} & \textbf{Alignment} \\ \midrule Baseline & 9.58 & 4.28 & 149 & $>$1 mo (est.) \\ + occur filter & 9.58 & 4.28 & \textbf{30.9} (-79\%) & \textbf{16.02} \\ + \#n func & 9.58 & \textbf{5.63} (+32\%) & \textbf{3.35} (-89\%) & \textbf{18.42} (+15\%)\\ + locality Min-Max & 9.58 & \textbf{1.58} (-72\%) & 3.35 & 18.42 \\ + MAD sample & \textbf{4.98} (-48\%) & 1.58 & 3.35 & 18.42\\ + parallel (n=12) & \textbf{0.54} (-89\%) & \textbf{0.14} (-91\%) & \textbf{0.62} (-81\%) & \textbf{2.25} (-88\%)\\ \bottomrule \end{tabular} \caption{Factor analysis (runtime in hours, and relative improvement) of each optimization on 1 year of data from station LTZ. Each optimization contributes meaningfully to the speedup of the pipeline, and together, the optimizations enable an over 100$\times$ end-to-end speedup. } \label{tab:factor} \end{table} In Table~\ref{tab:madsample}, we report the relative speed up in MAD calculation time as well as the average overlap between the binary fingerprints generated using the sampled MAD and the original MAD as a metric for accuracy. The results illustrate that runtime reduces linearly with sampling rate, as expected. At lower rates, I/O begins to dominate MAD calculation runtime so the runtime improvements suffer from diminishing return. 
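The MAD sampling optimization can be sketched as follows (a minimal illustration; the function name and array layout are assumptions, not the pipeline's actual code). Instead of computing the median absolute deviation of every spectral-image coefficient over all fingerprints, we estimate it from a random sample of rows:

```python
import numpy as np

def sampled_mad(coeffs, rate=0.1, rng=None):
    """Estimate the per-dimension median absolute deviation (MAD) from a
    `rate` fraction of the rows.
    coeffs: (n_fingerprints, n_coefficients) array of spectral coefficients."""
    rng = rng or np.random.default_rng(0)
    n = coeffs.shape[0]
    idx = rng.choice(n, size=max(1, int(n * rate)), replace=False)
    sample = coeffs[idx]
    median = np.median(sample, axis=0)
    return np.median(np.abs(sample - median), axis=0)
```

Because the median and MAD are robust order statistics, a modest sample already yields accurate estimates, which is why runtime can fall roughly linearly with the sampling rate at little cost in fingerprint quality.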
\begin{table}[t] \small \centering \begin{tabular}{r r r} \toprule \textbf{Sampling Rate} & \textbf{Accuracy (\%)} & \textbf{Speedup} \\ \midrule 0.001 & 94.9 & 350$\times$ \\ 0.01 & 98.7 & 99.8$\times$ \\ 0.1 & 99.5 & 10.5$\times$ \\ 0.5 & 99.7 & 2.2$\times$\\ 0.9 & 99.9 & 1.1$\times$\\ \bottomrule \end{tabular} \caption{Speedup and quality of different MAD sampling rates compared to no sampling on 1.3M fingerprints. Sampling enables a 100$\times$ speedup in MAD calculation with 98.7\% accuracy. Below 1\%, runtime improvements suffer from diminishing returns, as I/O begins to dominate the MAD calculation runtime. } \label{tab:madsample} \end{table} Finally, Figure~\ref{fig:lsh} illustrates the key difference between LSH and general hashing: LSH hash functions preserve the distance of items in the high-dimensional space, such that similar items are mapped to the same ``bucket'' with high probability. \section{Bandpass filter guidelines} \label{appendix:bp} \begin{figure} \includegraphics[width=\linewidth]{figs/bp_examples.pdf} \caption{Example hour-long spectrograms from the three components of continuous seismic data, sampled at 100 Hz, at station MLD from the Diablo Canyon, California, data set: East-West (top row), North-South (center row), vertical (bottom row). For this station, a 4-10 Hz bandpass filter (dotted red rectangle) was applied before entering the processing pipeline. (a) Example of signals that should be excluded by the bandpass filter: a magnitude 8.3 teleseismic earthquake from the Sea of Okhotsk (bordered by Japan and Russia) starting at time $\sim$1400 seconds, and persistent repeating noise throughout the entire hour at higher frequencies. (b) Example of a signal that should be included in the bandpass filter: a small local earthquake, with magnitude 1.7, at time $\sim$1800 seconds.} \label{fig:bp} \end{figure} Figure~\ref{fig:bp} illustrates the process of selecting the bandpass filter on an example data set. 
The provided examples are hour-long spectrograms computed from the three components of continuous seismic data at station MLD from the Diablo Canyon, California, data set. Figure~\ref{fig:bp}a shows examples of signals that should be excluded by the bandpass filter. The high-amplitude signal starting at time $\sim$1400 seconds is from a magnitude 8.3 teleseismic earthquake near Japan and Russia, with a long duration of over 10 minutes and predominantly lower frequency content (below 4 Hz). Generally, we are not interested in detecting large teleseismic earthquakes, because they are already detected and cataloged by global seismic networks (and shaking is usually felt near their origin). There is also persistent repeating noise throughout the entire hour at higher frequencies: it is especially prominent at 30-40 Hz on the East-West and North-South channels, but there are several bands of repeating noise, starting at a low of 12 Hz. We commonly observe repeating noise at lower frequencies (0-3 Hz) at most seismic stations, which is also seen in Figure~\ref{fig:bp}a after the teleseismic earthquake. It is essential to exclude as much of this persistent repeating noise from the bandpass filter as possible; otherwise, most of the fingerprints would match each other based on similar noise patterns, degrading both detection performance and runtime. Figure~\ref{fig:bp}b shows an example of a small (magnitude 1.7) local earthquake signal, at time $\sim$1800 seconds, that we would like to detect and that should therefore be included by the bandpass filter. A small local earthquake is much shorter in duration, typically a few seconds long, and has higher frequency content, up to 10-20 Hz, compared to a teleseismic earthquake. We choose the widest possible bandpass filter to keep as much of the desired local earthquake signal as we can, while excluding frequencies with persistent repeating noise. 
Figure~\ref{fig:bp}a and b show spectrograms from two different days (2013-05-24 and 2015-07-20) at one example seismic station. In general, we recommend randomly sampling and examining short spectrogram sections throughout the entire duration of available continuous seismic data, and at each seismic station used for detection, as the amplitudes and frequencies of the repeating noise can vary significantly over time and at different stations. Anthropogenic (cultural) noise levels are often higher during the day than at night, and higher during the workweek than on the weekend. Sometimes it is difficult to select a frequency range that does not contain any persistent repeating noise; in this case, we advise excluding frequency bands with the highest amplitudes of repeating noise. \section{Hash Signature Generation} \label{appendix:hash} We present pseudocode for the optimized hash signature generation procedure in Algorithm~\ref{alg:minmax}. \begin{algorithm}[] \footnotesize \begin{algorithmic} \Function{single\_hash}{d, t, k, seed} \Comment{Get all hash mappings} \For {x $\in$ \{1, 2, ..., d\}} \For {i $\in$ \{1, 2, ..., t\}} \For {j $\in$ \{1, 2, ..., k\}} \State hash[x][i][j] = \textproc{murmurhash}(x, seed + (i - 1) * k + j) \EndFor \EndFor \EndFor \Return hash \EndFunction \Function{minmax\_batch}{fp, hash} \Comment{Get hash signature for given batch} \For {x $\in$ \{1, 2, ..., fp.size()\}} \For {y $\in$ \{1, 2, ..., d\}} \If {fp[x][y] == 1} \For {i $\in$ \{1, 2, ..., t\}} \For {j $\in$ \{1, 2, ..., $\lceil\frac{k}{2}\rceil$ \} } \State minvals[i][j] = min(hash[y][i][j], minvals[i][j]) \State maxvals[i][j] = max(hash[y][i][j], maxvals[i][j]) \EndFor \EndFor \EndIf \EndFor \For {i $\in$ \{1, 2, ..., t\}} \State minmaxhash[x][i] = \textproc{hash\_combine}(minvals[i], maxvals[i]) \EndFor \EndFor \Return minmaxhash \EndFunction \\ \Function{gen\_signature}{fp, nprocs} \Comment{main function} \State hash = \textproc{single\_hash}(d, t, $\frac{k}{2}$, seed) \State fp\_partition = \textproc{partition}(fp, nprocs) \For
{i $\in$ \{1,2, ..., nprocs\}} \textbf{in parallel} \State \textproc{minmax\_batch}(fp\_partition[i], hash) \EndFor \EndFunction \end{algorithmic} \caption{Optimized and parallelized Min-Max hash generation} \label{alg:minmax} \end{algorithm} \section{Conclusion} In this work, we reported on a novel application of LSH to large-scale seismological data, as well as the challenges and optimizations required to scale the system to over a decade of continuous sensor data. This experience in scaling LSH for large-scale earthquake detection illustrates both the potential and the challenge of applying core data analytics primitives to data-driven domain science on large datasets. On the one hand, LSH and, more generally, time series similarity search, is well-studied, with scores of algorithms for efficient implementation: by applying canonical MinHash-based LSH, our seismologist collaborators were able to meaningfully analyze more data than would have been feasible via manual inspection. On the other hand, the straightforward implementation of LSH in the original FAST detection pipeline failed to scale beyond a few months of data. The particulars of seismological data---such as frequency imbalance in the time series and repeated background noise---placed severe strain on an unmodified LSH implementation and on researchers attempting to understand the output. As a result, the seismological discoveries we have described in this paper would not have been possible without domain-specific optimizations to the detection pipeline. We believe that these results have important implications for researchers studying LSH (e.g., regarding the importance of skew resistance) and will continue to bear fruit as we scale the system to even more data and larger networks. \subsection{Scalable data analytics primitives} Scalability poses both opportunities and challenges for domain sciences. 
The ability to process and analyze large amounts of data can lead to qualitatively new discoveries for science. However, core data analytics primitives such as LSH often require involved tuning and optimizations to achieve the desired performance for the application domains. In our experience applying LSH to seismology data, we've seen that improving cache locality enables a 3$\times$ improvement in hash signature generation, and that tuning core LSH parameters can lead to up to an order of magnitude difference in runtime. For hashing-based methods like LSH, we've also found that small implementation details can significantly affect the algorithm's performance. We describe an example encountered in the domain scientists' original implementation of LSH below. \minihead{Concatenate Hash Values} To combine $k$ 64-bit MinHash values into one 64-bit hash signature, the scientists kept the last 8 bits of each 64-bit MinHash value and concatenated them into one 64-bit value. Since the MinHash values are not uniformly distributed over the last 8 bits, this truncation and concatenation leads to worse skew of hash bucket sizes. By simply changing the above concatenation routine to the boost library implementation \texttt{boost::hash\_combine()}, the similarity search runtime on a 31M fingerprint dataset dropped from 37.6 hours to 9.1 hours. Therefore, high-performance, open-source libraries for core data analytics primitives can be tremendously beneficial to the science community. \subsection{Use of domain knowledge} \label{sec:domain} We found that domain-specific, data-informed adaptations can significantly improve an existing algorithm's performance in the application domain. As an example, we discuss the adaptations made to the fingerprinting algorithm for the seismology domain below. The fingerprint extraction procedure is based on Waveprint, an algorithm originally developed for audio signals~\cite{waveprint}. 
The key difference is that, instead of selecting the top-magnitude coefficients as in the Waveprint algorithm, the most anomalous coefficients are retained, which empirically improves earthquake detection results. Coefficients that lie on the tails of their distribution, rather than those that are large in absolute magnitude, are more representative of the key features of the original time series. In fact, on seismic data, a small subset of coefficients tends to produce the majority of top-magnitude coefficients across all fingerprints. On an example dataset, over 50\% of the coefficients are only ever selected in less than 1\% of the fingerprints~\cite{FASTFingerPrint}. This increases the overall similarity of background (non-seismic) signals, which degrades the performance of similarity search. Figure~\ref{fig:mad} provides an example of fingerprints of two samples from the background time series generated by the original Waveprint and the modified fingerprinting algorithms; fingerprints of the latter show much smaller Jaccard similarity. Since the input time series is dominated by background, by suppressing the overall similarity between the ``background'' fingerprints, we facilitate the detection of weak earthquake signals. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figs/fp_compare.pdf} \caption{Pixels in red represent intersection of fingerprints for background samples 1 and 2. The modified fingerprint (MAD) is able to suppress the similarity between the two background samples.} \label{fig:mad} \end{figure} We also found that the ability to incorporate strong domain priors into custom analytics pipelines can improve not only the performance but also the quality of the analytics. In the paper, we gave examples of how applying domain-specific filters based on knowledge of the frequency range and the frequency of occurrences of earthquakes can lead to orders-of-magnitude speedups in processing, as well as improved detection recall for catalog events. 
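The MAD-based selection of anomalous coefficients can be sketched as follows (an illustrative reimplementation under our own naming; it keeps the coefficients that deviate most from their dataset-wide medians, measured in MAD units, instead of the largest magnitudes):

```python
import numpy as np

def mad_anomalous_mask(dataset, coeffs, top_k):
    """Keep the top_k coefficients of `coeffs` that deviate most from their
    typical values across `dataset` (in MAD units), rather than the top_k
    largest magnitudes.
    dataset: (n_fingerprints, n_coefficients); coeffs: (n_coefficients,)."""
    median = np.median(dataset, axis=0)
    mad = np.median(np.abs(dataset - median), axis=0)
    deviation = np.abs(coeffs - median) / np.where(mad > 0, mad, 1.0)
    mask = np.zeros(coeffs.shape, dtype=bool)
    mask[np.argsort(deviation)[-top_k:]] = True
    return mask
```

A coefficient that is always large contributes little to a fingerprint's identity; one that is unusually large for its own distribution does, which is why the MAD-normalized criterion suppresses similarity between background fingerprints.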
We also showed that by using knowledge of the fixed inter-event time between recurring earthquakes, we can effectively prioritize seismic discoveries in the large amount of similarity search output. For the Diablo Canyon analysis, we were able to reduce the output size from over 38 billion pairs of similar time series segments to around 5K candidate earthquake events, with an 8\% false positive rate. \end{comment} \section{Evaluation} \label{sec:eval} In this section, we perform both a quantitative evaluation of the detection pipeline's performance and a qualitative analysis of the detection results. Our goal is to demonstrate that: \begin{enumerate}[topsep=.5em] \setlength\itemsep{0.2em} \item Each of our optimizations contributes meaningfully to the performance improvement; together, our optimizations enable an over 100$\times$ speedup in the end-to-end pipeline. \item Incorporating domain knowledge in the pipeline improves both the performance and the quality of the detection. \item The improved scalability enables scientific discoveries on two public datasets: we discovered 597 new earthquakes from a decade of seismic data near the Diablo Canyon nuclear power plant in California, as well as 6,123 new earthquakes from a year of seismic data from New Zealand. \end{enumerate} \minihead{Dataset} We evaluate on two public datasets used in seismological analyses with our domain collaborators. The first dataset includes 1 year of 100Hz time series data (3.15 billion points per station) from 5 seismic stations (LTZ, MQZ, KHZ, THZ, OXZ) in New Zealand. We use the vertical channel (usually the least noisy) from each station~\cite{GeoNet}. The second dataset of interest includes 7 to 10 years of 100Hz time series data from 11 seismic stations and 27 total channels near the Diablo Canyon power plant in California~\cite{NCEDC}. 
\minihead{Experimental Setup} We report results from evaluating the pipeline on a server with 512GB of RAM and two 28-thread Intel Xeon E5-2690 v4 2.6GHz CPUs. Our test server has L1, L2, L3 cache sizes of 32K, 256K and 35840K. We report the runtime averages from multiple trials. \subsection{End-to-end Evaluation} \label{sec:e2e} In this subsection, we report the runtime breakdown of the baseline implementation of the pipeline, as well as the effects of applying different optimizations. \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{figs/factor.pdf} \vspace{-1em} \caption{Factor analysis of processing 1 month (left) and 1 year (right) of 100Hz data from LTZ station in the New Zealand dataset. Each of our optimizations contributes to the performance improvement, and together they enable an over 100$\times$ end-to-end speedup. } \label{fig:e2etime} \end{figure*} To evaluate how our optimizations scale with data size, we run the end-to-end pipeline on 1 month and 1 year of time series data from station LTZ in the New Zealand dataset. We applied a bandpass filter of 3-20Hz on the original time series to exclude noisy low-frequency bands. For fingerprinting, we used a sliding window with a length of 30 seconds and a slide of 2 seconds, which results in 1.28M binary fingerprints for 1 month of time series data (15.7M for one year), each of dimension 8192; for similarity search, we use $6$ hash functions, and require a detection threshold of $5$ matches out of $100$ hash tables. We further investigate the effect of varying these parameters in the microbenchmarks in Section~\ref{eval:params}. Figure~\ref{fig:e2etime} shows the cumulative runtime after applying each optimization. Overall, our optimizations scale well with the size of the dataset, and enable an over 100$\times$ improvement in end-to-end processing time. 
We analyze each of these components in turn: First, we apply a 1\% occurrence filter (+ occur filter, Section~\ref{sec:noise}) during similarity search to exclude frequent fingerprint matches generated by repeating background noise. This enables a 2-5$\times$ improvement in similarity search runtime while reducing the output size by 10-50$\times$, reflected in the decrease in postprocessing time. Second, we further reduce the search time by increasing the number of hash functions to 8 and lowering the detection threshold to 2 (+ increase \#funcs, Section~\ref{sec:searchparam}). While this increases the hash signature generation time and the output size, it enables a roughly 10$\times$ improvement in search time for both datasets. Third, we reduce the hash signature generation time by improving the cache locality and reducing the computation with Min-Max hash instead of MinHash (+ locality MinMax, Section~\ref{sec:hashgen}), which leads to a 3$\times$ speedup for both datasets. Fourth, we speed up fingerprinting by 2$\times$ by estimating MAD statistics with a 10\% sample (+ MAD sample, Section~\ref{sec:mad}). Finally, we enable parallelism and run the pipeline with 12 threads (Sections~\ref{sec:mad},~\ref{sec:searchpart} and~\ref{sec:networkimp}). As a result, we see an almost linear decrease in runtime in each part of the pipeline. Notably, due to the overall lack of data dependencies in this scientific pipeline, simple parallelization can already enable significant speedups. The improved scalability enables us to scale analytics from 3 months to over 10 years of data. We discuss qualitative detection results from both datasets in Section~\ref{sec:eq}. \subsection{Effect of domain-specific optimizations} \label{eval:domain} In this section, we investigate the effect of applying domain-specific optimizations to the pipeline. We demonstrate that incorporating domain knowledge can improve both the performance and the result quality of the detection pipeline. 
\minihead{Occurrence filter} We evaluate the effect of applying the occurrence filter during similarity search on the five stations from the New Zealand dataset. For this experiment, we use a partition size of 1 month as the duration for the occurrence threshold; a $>$1\% threshold indicates that a fingerprint matches over 1\% (10K) other fingerprints in the same month. We report the total percentage of filtered fingerprints under varying thresholds in Table~\ref{tab:missedevents}. We also evaluate the accuracy of the occurrence filter by comparing the timestamps of filtered fingerprints with the catalog of the arrival times of known earthquakes at each station. We report the false positive rate, or the number of filtered earthquakes over the total number of cataloged events, of the filter under varying thresholds. The results show that as the occurrence filter becomes stronger, the percentage of filtered fingerprints and the false positive rate both increase. For seismic stations suffering from correlated noise, the occurrence filter can effectively eliminate a significant amount of fingerprints from the similarity search. For station LTZ, a $>$1\% threshold filters out up to 30\% of the total fingerprints without any false positives, which results in a 4$\times$ improvement in runtime. For other stations, the occurrence filter has little influence on the results. This is expected since these stations do not have repeating noise signals present at station LTZ (Figure~\ref{fig:ltz_noise}). In practice, correlated noise is rather prevalent in seismic data. In the Diablo Canyon dataset for example, we applied the occurrence filter on three out of the eleven seismic stations in order for the similarity search to finish in a tractable time. 
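The occurrence filter described above can be sketched as follows. This is a minimal, illustrative version: the function names and the pair-list input format are assumptions for exposition, not the pipeline's actual interface.

```python
from collections import Counter

def occurrence_filter(match_pairs, partition_size, threshold=0.01):
    """Sketch of the occurrence filter: within one partition (e.g., one
    month of fingerprints), drop any fingerprint that matches more than
    `threshold` (1% by default) of the partition's fingerprints, since
    such pervasive matches are characteristic of repeating background
    noise rather than earthquakes."""
    pairs = list(match_pairs)
    counts = Counter()
    for i, j in pairs:
        counts[i] += 1
        counts[j] += 1
    limit = threshold * partition_size
    noisy = {fp for fp, c in counts.items() if c > limit}
    # Keep only pairs where neither side is flagged as repeating noise.
    kept = [(i, j) for i, j in pairs if i not in noisy and j not in noisy]
    return kept, noisy
```

A fingerprint that matches many others in its partition is flagged and all of its pairs are discarded, which shrinks both the search output and the postprocessing load.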
\begin{table*} \centering \small \ra{1.1} \begin{tabular}{@{}rrrrcrrrcrrrcrrrcrrr@{}} \hlineB{1.5} & \multicolumn{3}{c}{\textbf{LTZ} (1548 events)} & \phantom{a}& \multicolumn{3}{c}{\textbf{MQZ} (1544 events)} & \phantom{a} & \multicolumn{3}{c}{\textbf{KHZ} (1542 events)} & \phantom{a} & \multicolumn{3}{c}{\textbf{THZ} (1352 events)}& \phantom{a} & \multicolumn{3}{c}{\textbf{OXZ} (1248 events)}\\ \cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12} \cmidrule{14-16} \cmidrule{18-20} \textbf{Thresh} & FP & Filtered & Time && FP & Filtered& Time&& FP& Filtered & Time && FP& Filtered & Time && FP& Filtered& Time\\ \hline $>$5.0\% & 0 & 0.09 & 149.3 && 0 & 0 & 2.8 && 0& 0 & 2.2 && 0& 0 & 2.4 && 0& 0 & 2.6\\ $>$1.0\% & 0 & 30.1 & 31.0 && 0 & 0 & 2.7 && 0& 0 & 2.3 && 0& 0 & 2.3 && 0& 0 & 2.6\\ $>$0.5\% & 0 & 31.2 & 32.1 && 0 & 0.09 & 2.8 && 0& 0 & 2.4 && 0& 0 & 2.4 && 0.08 & 0.08 & 2.7\\ $>$0.1\% & 0 & 32.1 & 28.6 && 0.07 & 0.3 & 2.7 && 0 & 0.03 & 2.4 && 0& 0.02 & 2.3 && 0.08& 0.17 & 2.6\\ \hlineB{1.5} \end{tabular} \vspace{0.5em} \caption{The table shows that the percentage of fingerprints filtered (Filtered) and the false positive rate (FP) both increase as the occurrence filter becomes stronger (from filtering matches above 5.0\% to above 0.1\%). The runtime (in hours) measures similarity search time. } \label{tab:missedevents} \end{table*} \minihead{Bandpass filter} We compare similarity search on the same dataset (Nyquist frequency 50Hz) before and after applying bandpass filters. The first bandpass filter (bp: 1-20Hz) is selected because most seismic signals are under 20Hz; the second (bp: 3-20Hz) is selected after manually inspecting sample spectrograms of the dataset and excluding noisy low frequencies. Figure~\ref{fig:bpfilter} reports the similarity search runtime for fingerprints generated with different bandpass filters. Overall, similarity search suffers from additional matches generated in the noisy frequency bands outside the range of interest in seismology. 
For example, at station OXZ, removing the bandpass filter leads to a 16$\times$ slowdown in runtime and a 209$\times$ increase in output size. We compare detection recall on 8811 catalog earthquake events for different bandpass filters. The recall for the unfiltered data (0-50Hz), the 1-20Hz and the 3-20Hz bandpass filters is 20.3\%, 23.7\% and 45.2\%, respectively. The overall low recall is expected, as we only used 4 (out of over 50) stations in the seismic network that contribute to the generation of catalog events. Empirically, a narrow, domain-informed bandpass filter focuses the comparison of fingerprint similarity on frequencies that are characteristic of seismic events, leading to improved similarity between earthquake events and therefore increased recall. We provide guidelines for setting the bandpass filter in Appendix~\ref{appendix:bp}. \begin{figure} \centering \includegraphics[width=\linewidth]{figs/bp_filter.pdf} \vspace{-1.1em} \caption{LSH runtime under different bandpass filters. Matches of noise in the non-seismic frequency bands can lead to a 16$\times$ increase in runtime and an over 200$\times$ increase in output size for unfiltered time series.} \label{fig:bpfilter} \end{figure} \subsection{Effect of pipeline parameters} \label{eval:params} In this section, we evaluate the space/quality and time trade-offs for core pipeline parameters. \minihead{MAD sampling rate} We evaluate the speed and quality trade-off of calculating the median and MAD of the wavelet coefficients for fingerprints via sampling. We measure the runtime and accuracy on the 1 month dataset in Section~\ref{sec:e2e} (1.3M fingerprints) under varying sampling rates. As expected, both runtime and accuracy decrease with the sampling rate. For example, a 10\% and a 1\% sampling rate produce fingerprints with 99.7\% and 98.7\% accuracy, while enabling a near linear speedup of 10.5$\times$ and 99.8$\times$, respectively. 
Below 1\%, runtime improvements suffer from diminishing returns, as IO begins to dominate the MAD calculation runtime--on this dataset, a 0.1\% sampling rate only speeds up the MAD calculation by 350$\times$. We include additional results of this trade-off in the appendix. \minihead{LSH parameters} We report the runtime of the similarity search under different LSH parameters in Figure~\ref{fig:lshparam}. As indicated in Figure~\ref{fig:prob}, the three sets of parameters that we evaluate yield a near identical probability of detection given the Jaccard similarity of two fingerprints. However, by increasing the number of hash functions and thereby increasing the selectivity of hash signatures, we decrease the average number of lookups per query by over 10$\times$. This results in around a 10$\times$ improvement in similarity search time. \begin{figure} \centering \includegraphics[width=\linewidth]{figs/numhash.pdf} \vspace{-1.5em} \caption{Effect of LSH parameters on similarity search runtime and average query lookups. Increasing the number of hash functions significantly decreases the average number of lookups per query, which results in an up to 10$\times$ improvement in runtime. } \label{fig:lshparam} \end{figure} \minihead{Number of partitions} We report the runtime and memory usage of the similarity search with a varying number of partitions in Figure~\ref{fig:partition}. As the number of partitions increases, the runtime increases slightly due to the overhead of initializing and deleting hash tables. In contrast, memory usage decreases, as we only need to keep a subset of the hash signatures in the hash tables at any time. Overall, by increasing the number of partitions from 1 to 8, we are able to decrease memory usage by over 60\% while incurring less than 20\% runtime overhead. This allows us to run LSH on larger datasets with the same amount of memory. 
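The MAD sampling estimate evaluated above can be sketched as follows. This is a simplified stand-in: the pipeline computes these statistics per wavelet-coefficient dimension, whereas this hypothetical helper operates on a single list of values.

```python
import random
from statistics import median

def mad_from_sample(coeffs, rate=0.1, seed=0):
    """Estimate the median and the median absolute deviation (MAD) of
    wavelet coefficients from a random sample rather than the full data.
    `rate` is the sampling rate (e.g., 0.1 for a 10% sample); `seed`
    makes the sketch deterministic for illustration."""
    rng = random.Random(seed)
    n = max(1, int(len(coeffs) * rate))
    sample = rng.sample(coeffs, n)
    med = median(sample)
    # MAD: median of absolute deviations from the (sampled) median.
    mad = median(abs(c - med) for c in sample)
    return med, mad
```

Because the median and MAD are robust order statistics, a modest sample already tracks the full-data values closely, which is why accuracy degrades so slowly as the sampling rate drops.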
\begin{figure} \centering \includegraphics[width=0.95\linewidth]{figs/partition_eval.pdf} \vspace{-0.5em} \caption{Runtime and memory usage for similarity search under a varying number of partitions. By increasing the number of search partitions, we are able to decrease the memory usage by over 60\% while incurring less than 20\% runtime overhead.} \label{fig:partition} \end{figure} \minihead{Parallelism} Finally, to quantify the speedups from parallelism, we report the runtime of LSH hash signature generation and similarity search using a varying number of threads. For hash signature generation, we report time taken to generate hash mappings as well as the time taken to compute Min-Max hash for each fingerprint. For similarity search, we fix the input hash signatures and vary the number of threads assigned during the search. We show the runtime averaged from four seismic stations in Figure~\ref{fig:parallel}. Overall, hash signature generation scales almost perfectly (linearly) up to 32 threads, while similarity search scales slightly worse; both experience significant performance degradation running with all available threads. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figs/parallelism.pdf} \vspace{-0.5em} \caption{Hash generation scales near linearly up to 32 threads. } \label{fig:parallel} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{figs/diablo_detections_magnitude_vs_time.pdf} \vspace{-1.8em} \caption{The left axis shows origin times and magnitude of detected earthquakes, with the catalog events marked in blue and new events marked in red. The colored bands in the right axis represent the duration of data used for detection collected from 11 seismic stations and 27 total channels. 
Overall, we detected 3957 catalog earthquakes (diamond) as well as 597 new local earthquakes (circle) from this dataset.} \label{fig:neweq} \end{figure*} \subsection{Comparison with Alternatives} \label{sec:falconn} In this section, we evaluate against alternative similarity search algorithms and supervised methods. We include additional experiment details in Appendix~\ref{appendix:alternate}. \minihead{Alternative Similarity Search Algorithms} We compare the single-core query performance of our MinHash LSH to 1) an alternative open source LSH library, FALCONN~\cite{falconnlib}, and 2) four state-of-the-art set similarity join algorithms: PPJoin~\cite{ppjoin}, GroupJoin~\cite{groupjoin}, AllPairs~\cite{allpairs} and AdaptJoin~\cite{adaptjoin}. We use 74,795 fingerprints with dimension 2048 and 10\% non-zero entries, and a Jaccard similarity threshold of 0.5 for all libraries. Compared to exact algorithms like set similarity joins, approximate algorithms such as LSH incur a 6\% false negative rate. However, MinHash LSH enables a 24$\times$ to 65$\times$ speedup against FALCONN and a 63$\times$ to 197$\times$ speedup against set similarity joins (Table~\ref{tab:setsim}). Characteristics of the input fingerprints contribute to the performance differences: the fixed number of non-zero entries in fingerprints makes pruning techniques in set similarity joins based on set length irrelevant; our results corroborate previous findings that MinHash outperforms SimHash on binary, sparse input~\cite{minhashsimhash}. 
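As background for this comparison, the MinHash primitive at the core of our search can be sketched as follows. This is a minimal, illustrative version using Python's built-in `hash`; the pipeline's cheaper Min-Max variant and the banding of signature components into hash tables are omitted.

```python
import random

def minhash_signature(items, num_hashes=8, seed=0):
    """Plain MinHash sketch: each of `num_hashes` seeded hash functions
    keeps the minimum hash value over the set's elements. Two sets agree
    on a given component with probability equal to their Jaccard
    similarity, so signatures preserve similarity compactly."""
    rng = random.Random(seed)
    seeds = [rng.getrandbits(32) for _ in range(num_hashes)]
    return tuple(min(hash((s, x)) for x in items) for s in seeds)

def estimate_jaccard(sig_a, sig_b):
    """The fraction of matching components estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

For example, two sets of 100 elements that share 50 of them have Jaccard similarity $50/150 \approx 0.33$, and the component-agreement rate of their signatures concentrates around that value as the number of hash functions grows.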
\begin{table} \small \center \begin{tabular}{r r r} \hlineB{1.5} \textbf{Algorithm} & \textbf{Average Query time} & \textbf{Speedup}\\ \hline MinHash LSH & 36 $\mu$s& -- \\ FALCONN vanilla LSH & 0.87ms & 24$\times$ \\ FALCONN multi-probe LSH & 2.4ms & 65$\times$\\ AdaptJoin~\cite{adaptjoin} & 2.3ms& 63$\times$ \\ AllPairs~\cite{allpairs} & 7.1ms& 197$\times$ \\ GroupJoin~\cite{groupjoin} & 5.7ms& 159$\times$ \\ PPJoin~\cite{ppjoin} & 5.5ms& 151$\times$ \\ \hlineB{1.5} \end{tabular} \vspace{0.5em} \caption{Single-core per-datapoint query time for LSH and set similarity joins. MinHash LSH incurs a 6.6\% false negative rate while enabling up to a 197$\times$ speedup. } \label{tab:setsim} \end{table} \minihead{Supervised Methods} We report results from evaluating two supervised models, WEASEL~\cite{weasel} and ConvNetQuake~\cite{convquake}, on the Diablo Canyon dataset. Both models were trained on labeled catalog events (3585 events from 2010 to 2017) and randomly sampled noise windows at station PG.LMD. We also augment the earthquake training examples by 1) adding earthquake examples from another station PG.DCD, 2) perturbing existing events with white noise, and 3) shifting the location of the earthquake event in the window. Table~\ref{tab:model} reports the test accuracy of the two models on a sample of 306 unseen catalog events and 449 new events detected by our pipeline (FAST events), as well as the false positive rate estimated from manual inspection of 100 random earthquake predictions. While the supervised methods achieve high accuracy in classifying unseen catalog and noise events, they exhibit a high false positive rate (90$\pm$5.88\%) and miss 30-32\% of the new earthquake events detected by our pipeline. The experiment suggests that unsupervised methods like our pipeline are able to detect qualitatively different events from the existing catalog, and that supervised methods are complements to, rather than replacements of, unsupervised methods for earthquake detection. 
\begin{table} \small\center \begin{tabular}{ r r r} \hlineB{1.5} & \textbf{WEASEL~\cite{weasel}} & \textbf{ConvNetQuake~\cite{convquake}} \\ \hline Test Catalog Acc. (\%) & 90.8 & 90.6 \\ Test FAST Acc. (\%) & 68.0 & 70.5 \\ True Negative Rate (\%) & 98.6 & 92.2\\ False Positive Rate (\%) & 90.0$\pm$5.88 & 90.0$\pm$5.88 \\ \hlineB{1.5} \end{tabular} \vspace{0.5em} \caption{Supervised methods trained on catalog events exhibit high false positive rate and a 20\% accuracy gap between predictions on catalog and FAST detected events. } \label{tab:model} \end{table} \subsection{Qualitative Results} \label{sec:eq} We first report our findings in running the pipeline over a decade (06/2007 to 10/2017) of continuous seismic data from 11 seismic stations (27 total channels) near the Diablo Canyon nuclear power plant in central California. The chosen area is of special interest as there are many active faults near the power plant. Detecting additional small earthquakes in this region will allow seismologists to determine the size and shape of nearby fault structures, which can potentially inform seismic hazard estimates. We applied station-specific bandpass filters between 3 and 12 Hz to remove repeating background noise from the time series. In addition, we applied the occurrence filter on three out of the eleven seismic stations that experienced corrupted sensor measurements. The number of input binary fingerprints for each seismic channel ranges from 180 million to 337 million; the similarity search runtime ranges from 3 hours to 12 hours with 48 threads. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figs/location.pdf} \vspace{-0.5em} \caption{Overview of the location of detected catalog events (gray open circles) and new events (red diamonds). 
The pipeline was able to detect earthquakes close to the seismic network (boxed) as well as all over California.} \label{fig:eqloc} \end{figure} Among the 5048 detections above our detection threshold, 397 detections (about 8\%) were false positives, confirmed via visual inspection: 30 were duplicate earthquakes with a lower similarity, 18 were catalog quarry blasts, 5 were deep teleseismic earthquakes (large earthquakes from $>$1000 km away). There were also 62 non-seismic signals detected across the seismic network; we suspect that some of these waveforms are sonic booms. Overall, we were able to detect and locate 3957 catalog earthquakes, as well as 597 new local earthquakes. Figure~\ref{fig:neweq} shows an overview of the origin times of the detected earthquakes, which are spread over the entire ten-year span. The detected events include both low-magnitude events near the seismic stations and larger events that are farther away. Figure~\ref{fig:eqloc} visualizes the locations of both catalog events and newly detected earthquakes, and Figure~\ref{fig:zoomineqloc} zooms in on earthquakes in the vicinity of the power plant. Despite the low rate of local earthquake activity (535 total catalog events from 2007 to 2017 within the area shown in Figure~\ref{fig:zoomineqloc}), we were able to detect 355 new events that are between $-0.2$ and 2.4 in magnitude and located within the seismic network, where many active faults exist. We missed 261 catalog events, almost all of which originated outside the network of interest. Running the detection pipeline at scale enables scientists to discover earthquakes from unknown sources. These newly detected events will be used to determine the details of active fault structures near the power plant. We are also actively working with our domain collaborators on additional analysis of the New Zealand dataset. 
The pipeline detected 11419 events, including 4916 catalog events, 355 teleseismic events, 6123 new local earthquakes and 25 false positives (noise waveforms) verified by the seismologists. We are preparing these results for publication in seismological venues, and expect to further improve the detection results by scaling up the analysis to more seismic stations over a longer duration of time. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figs/zoomin_location.pdf} \vspace{-0.5em} \caption{Zoomed-in view of the locations of newly detected earthquakes (red diamonds) and cataloged events (blue circles) near the seismic network (box in Figure~\ref{fig:eqloc}). The new local earthquakes contribute detailed information about the structure of faults.} \label{fig:zoomineqloc} \end{figure} \section{Introduction} Locality Sensitive Hashing (LSH)~\cite{lsh} is a well-studied computational primitive for efficient nearest neighbor search in high-dimensional spaces. LSH hashes items into low-dimensional spaces such that similar items have a higher collision probability in the hash table. Successful LSH applications include entity resolution~\cite{lsher}, genome sequence comparison~\cite{lshgenome}, text and image search~\cite{lshtextsearch,lshimagesearch}, near duplicate detection~\cite{neardup,googlededup}, and video identification~\cite{lshvideo}. In this paper, we present an innovative use of LSH---and associated challenges at scale---in large-scale earthquake detection across seismic networks. Earthquake detection is particularly interesting in both its abundance of raw data and scarcity of labeled examples: First, seismic data is large. Earthquakes are monitored by seismic networks, which can contain thousands of seismometers that continuously measure ground motion and vibration. For example, Southern California alone has over 500 seismic stations, each collecting continuous ground motion measurements at 100Hz. 
As a result, this network alone has collected over ten trillion ($10^{13}$) data points in the form of time series in the past decade~\cite{caltech}. Second, despite large measurement volumes, only a small fraction of earthquake events are cataloged, or confirmed and hand-labeled by domain scientists. As earthquake magnitude (i.e., size) decreases, the frequency of earthquake events increases exponentially. Worldwide, major earthquakes (magnitude 7+) occur approximately once a month, while magnitude 2.0 and smaller earthquakes can occur several thousand times a day. At low magnitudes, it is increasingly difficult to detect earthquake signals because earthquake energy approaches the noise floor, and conventional seismological analyses can fail to disambiguate between signal and noise. Nevertheless, detecting these small earthquakes is important in uncovering unknown seismic sources~\cite{smalleq4,FASTlarge}, improving the understanding of earthquake mechanics~\cite{smalleq1,smalleq3}, and better predicting the occurrences of future events~\cite{smalleq2}. \begin{figure} \centering \includegraphics[width=\linewidth]{figs/fig1_p_s.pdf} \caption{Example of near identical waveforms between occurrences of the same earthquake two months apart, observed at three seismic stations in New Zealand. The stations experience increased ground motions upon the arrivals of seismic waves (e.g., P and S waves). This paper scales LSH to over 30 billion data points and discovers 597 and 6123 new earthquakes near the Diablo Canyon nuclear power plant in California and in New Zealand, respectively. } \label{fig:similar_waveforms} \vspace{-0.5em} \end{figure} To take advantage of the large volume of unlabeled raw measurement data, seismologists have developed an unsupervised, data-driven earthquake detection method, Fingerprint And Similarity Thresholding (FAST), based on waveform similarity~\cite{FAST}. 
Seismic sources repeatedly generate earthquakes over the course of days, months or even years, and these earthquakes show near identical waveforms when recorded at the same seismic station, regardless of the earthquake's magnitude~\cite{similarwaveform,similareq}. Figure~\ref{fig:similar_waveforms} illustrates this phenomenon by depicting a pair of reoccurring earthquakes that are two months apart, observed at three seismic stations in New Zealand. By applying LSH to identify similar waveforms from seismic data, seismologists were able to discover new, low-magnitude earthquakes without knowledge of prior earthquake events. Despite early successes, seismologists had difficulty scaling their LSH-based analysis beyond 3 months of time series data ($7.95\times10^8$ data points) at a single seismic station~\cite{FASTlarge}. The FAST implementation faces severe scalability challenges. Contrary to what LSH theory suggests, the actual LSH runtime in FAST grows near quadratically with the input size due to correlations in the seismic signals: in an initial performance benchmark, the similarity search took 5 CPU-days to process 3 months of data, and, with a 5$\times$ increase in dataset size, LSH query time increased by 30$\times$. In addition, station-specific repeated background noise leads to an overwhelming number of similar but non-earthquake time series matches, crippling both throughput and seismologists' ability to sift through the output, which can number in the hundreds of millions of events. Ultimately, these scalability bottlenecks prevented seismologists from making use of the decades of data at their disposal. In this paper, we show how systems, algorithms, and domain expertise can go hand-in-hand to deliver substantial scalability improvements for this seismological analysis. Via algorithmic design, optimization using domain knowledge, and data engineering, we scale the FAST workload to years of continuous data at multiple stations. 
In turn, this scalability has enabled new scientific discoveries, including previously unknown earthquakes near a nuclear reactor in San Luis Obispo, California, and in New Zealand. Specifically, we build a scalable end-to-end earthquake detection pipeline comprised of three main steps. First, the fingerprint extraction step encodes time-frequency features of the original time series into compact binary fingerprints that are more robust to small variations. To address the bottleneck caused by repeating non-seismic signals, we apply domain-specific filters based on the frequency bands and the frequency of occurrences of earthquakes. Second, the search step applies LSH on the binary fingerprints to identify all pairs of similar time series segments. We pinpoint high hash collision rates caused by physical correlations in the input data as a core culprit of LSH performance degradation and alleviate the impact of large buckets by increasing hash selectivity while keeping the detection threshold constant. Third, the alignment step significantly reduces the size of detection results and confirms seismic behavior by performing spatiotemporal correlation with nearby seismic stations in the network~\cite{networkpaper}. To scale this analysis, we leverage domain knowledge of the invariance of the time difference between a pair of earthquake events across all stations at which they are recorded. In summary, as an innovative systems and applications paper, this work makes several contributions: \begin{itemize}[noitemsep,topsep=.3em] \setlength\itemsep{0.1em} \item We report on a new application of LSH in seismology as well as a complete end-to-end data science pipeline, including non-trivial pre-processing and post-processing, that scales to a decade of continuous time series for earthquake detection. \item We present a case study for using domain knowledge to improve the accuracy and efficiency of the pipeline. 
We illustrate how applying seismological domain knowledge in each component of the pipeline is critical to scalability. \item We demonstrate that our optimizations enable a cumulative two-order-of-magnitude speedup in the end-to-end detection pipeline. These quantitative improvements enable qualitative discoveries: we discovered 597 new earthquakes near the Diablo Canyon nuclear power plant in California and 6123 new earthquakes in New Zealand, allowing seismologists to determine the size and shape of nearby fault structures. \end{itemize} Beyond these contributions to a database audience, our solution is an open source tool, available for use by the broader scientific community. We have already run workshops for seismologists at Stanford~\cite{fastgithub} and believe that the pipeline can not only facilitate targeted seismic analysis but also contribute to label generation for supervised methods in seismic data~\cite{convquake}. The rest of the paper proceeds as follows. We review background information about earthquake detection in Section 2 and discuss additional related work in Section 3. We give a brief overview of the end-to-end detection pipeline and key technical challenges in Section 4. Sections 5, 6 and 7 present details as well as optimizations in the fingerprint extraction, similarity search and spatiotemporal alignment steps of the pipeline. We evaluate both the quantitative performance improvements of our optimizations and the qualitative results of new seismic findings in Section 8. In Section 9, we reflect on lessons learned and conclude. \section*{Acknowledgements} We thank the many members of the Stanford InfoLab for their valuable feedback on this work. 
This research was supported in part by affiliate members and other supporters of the Stanford DAWN project---Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware---as well as Toyota Research Institute, Keysight Technologies, Hitachi, Northrop Grumman, Amazon Web Services, Juniper Networks, NetApp, PG\&E, the Stanford Data Science Initiative, the Secure Internet of Things Project, and the NSF under grant EAR-1818579 and CAREER grant CNS-1651570. \bibliographystyle{abbrv} \Urlmuskip=0mu plus 1mu \section{Pipeline Overview} \label{sec:overview} In this section, we provide an overview of the three main steps of our end-to-end detection pipeline. We elaborate on each step---and our associated optimizations---in later sections, referenced inline. The input of the detection pipeline consists of continuous ground motion measurements in the form of time series, collected from multiple stations in the seismic network. The output is a list of potential earthquakes, specified in the form of timestamps when the seismic wave arrives at each station. From there, seismologists can compare with public earthquake catalogs to identify new events, and visually inspect the measurements to confirm seismic findings. Figure~\ref{fig:pipeline} illustrates the three major components of the end-to-end detection pipeline: fingerprint extraction, similarity search, and spatiotemporal alignment. For each input time series, or continuous ground motion measurements from a seismic channel, the algorithm slices the input into short windows of overlapping time series segments and encodes time-frequency features of each window into a binary fingerprint; the similarity of the fingerprints resembles that of the original waveforms (Section~\ref{sec:fp}). The algorithm then performs an all pairs similarity search via LSH on the binary fingerprints and identifies pairs of highly similar fingerprints (Section~\ref{sec:search}). 
Finally, like a traditional associator that maps earthquake detections at each station to a consistent seismic source, in the spatiotemporal alignment stage, the algorithm combines, filters and clusters the outputs from all seismic channels to generate a list of candidate earthquake detections with high confidence (Section~\ref{sec:network}). A na\"ive implementation of the pipeline poses several scalability challenges. For example, we observed LSH performance degradation in our application caused by the non-uniformity and correlation in the binary fingerprints; the correlations induce undesired LSH hash collisions, which significantly increase the number of lookups per similarity search query (Section~\ref{sec:searchparam}). In addition, the similarity search does not distinguish seismic from non-seismic signals. In the presence of repeating background signals, similar noise waveforms can outnumber similar earthquake waveforms, leading to more than an order of magnitude slowdown in runtime and increase in output size (Section~\ref{sec:noise}). As the input time series and the output of the similarity search become larger, the pipeline must adapt to data sizes that are too large to fit into main memory (Sections~\ref{sec:searchpart},~\ref{sec:networkimp}). In this paper, we focus on single-machine, main-memory execution on commodity servers with multicore processors. We parallelize the pipeline within a given server but otherwise do not distribute the computation to multiple servers. In principle, the parallelization efforts extend to distributed execution. However, given the poor quadratic scalability of the unoptimized pipeline, distribution alone would not have been a viable option for scaling to the desired data volume. As a result of the optimizations described in this paper, we are able to scale to a decade of data on a single node without requiring distribution. However, we view distributed execution as a valuable extension for future work. 
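The windowing performed during fingerprint extraction (slicing each input time series into short overlapping segments before encoding, as described in the overview) can be sketched as follows; the sample-based window length and slide are illustrative parameters, not the pipeline's configuration interface.

```python
def sliding_windows(series, length, slide):
    """Slice a time series into overlapping windows of `length` samples,
    advancing `slide` samples at a time. With 100Hz data, a 30-second
    window and a 2-second slide (3000 and 200 samples), one month of
    data yields roughly 1.3M windows, matching the fingerprint counts
    reported in the evaluation."""
    return [series[i:i + length]
            for i in range(0, len(series) - length + 1, slide)]
```

Each window is subsequently transformed into a binary fingerprint, so the slide length directly controls both temporal resolution and the number of items entering the similarity search.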
In the remaining sections of this paper, we describe the design decisions as well as performance optimizations for each pipeline component. Most of our optimizations focus on the all pairs similarity search, where the initial implementation exhibited near quadratic growth in runtime with the input size. We show in the evaluation that these optimizations enable speedups of more than two orders of magnitude in the end-to-end pipeline. \section{Spatiotemporal Alignment} \label{sec:network} The LSH-based similarity search outputs pairs of similar fingerprints (or waveforms) from the input, without knowing whether or not the pairs correspond to actual earthquake events. In this section, we show that by incorporating domain knowledge, we are able to significantly reduce the size of the output and prioritize seismic findings in the similarity search results. We briefly summarize the aggregation and filtering techniques on the level of seismic channels, seismic stations and seismic networks introduced in a recent paper in seismology~\cite{networkpaper} (Section~\ref{sec:networkoverview}). We then describe the implementation challenges and our out-of-core adaptations enabling the algorithm to scale to large output volumes (Section~\ref{sec:networkimp}). \subsection{Alignment Overview} \label{sec:networkoverview} The similarity search computes a sparse similarity matrix $\mathcal{M}$, where the non-zero entry $\mathcal{M}[i, j]$ represents the similarity of fingerprints $i$ and $j$. In order to identify weak events in low signal-to-noise ratio settings, seismologists set lenient detection thresholds for the similarity search, resulting in large outputs in practice. For example, one year of input time series data can easily generate 100G of output, or more than 5 billion pairs of similar fingerprints. 
Since it is infeasible for seismologists to inspect all results manually, we need to automatically filter and align the similar fingerprint pairs into a list of potential earthquakes with high confidence. Based on algorithms proposed in a recent work in seismology~\cite{networkpaper}, we seek to reduce similarity search results at the level of seismic channels, stations and also across a seismic network. Figure~\ref{fig:network_example} gives an overview of the spatiotemporal alignment procedure. \minihead{Channel Level} Seismic channels at the same station experience ground movements at the same time. Therefore, we can directly merge detection results from each channel of the station by summing the corresponding similarity matrices. Given that earthquake-triggered fingerprint matches tend to register at multiple channels whereas matches induced by local noise might only appear on one channel, we can prune detections by imposing a slightly higher similarity threshold on the combined similarity matrix. This ensures that we include either matches with high similarity or weaker matches registered on more than one channel. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/network.jpg} \vspace{-1em} \caption{The alignment procedure combines similarity search outputs from all channels in the same station (Channel Level), groups similar fingerprint matches generated from the same pair of reoccurring earthquakes (Station Level), and checks across seismic stations to reduce false positives in the final detection list (Network Level).} \label{fig:network_example} \end{figure} \minihead{Station Level} Given a combined similarity matrix for each seismic station, domain scientists have found that earthquake events can be characterized by thin, diagonal-shaped clusters in the matrix, which correspond to groups of similar fingerprint pairs separated by a constant offset~\cite{networkpaper}. 
The constant offset represents the time difference, or the inter-event time, between a pair of reoccurring earthquake events. One pair of reoccurring earthquake events can generate multiple fingerprint matches in the similarity matrix, since event waveforms are longer than a fingerprint time window. We exclude ``self-matches'' generated from adjacent/overlapping fingerprints that are not attributable to reoccurring earthquakes. After grouping similar fingerprint pairs into clusters of thin diagonals, we reduce each cluster to a few summary statistics, such as the bounding box of the diagonal, the total number of similar pairs in the bounding box, and the sum of their similarity. Compared to storing every similar fingerprint pair, the clusters and summary statistics significantly reduce the size of the output. \minihead{Network Level} Earthquake signals also show strong temporal correlation across the seismic network, which we exploit to further suppress non-earthquake matches. Since an earthquake's travel time is a function only of its distance from the source, not of its magnitude, reoccurring earthquakes generated from the same source take a fixed travel time from the source to each seismic station on every occurrence. Assume that an earthquake originating from source $X$ takes $\delta t_A$ and $\delta t_B$ to travel to seismic stations $A$ and $B$, and that the source generates two earthquakes at times $t_1$ and $t_2$ (Figure~\ref{fig:interevent}). Station $A$ experiences the arrivals of the two earthquakes at times $t_1 + \delta t_A$ and $t_2 + \delta t_A$, while station $B$ experiences the arrivals at $t_1 + \delta t_B$ and $t_2 + \delta t_B$.
The inter-event time $\Delta t$ of these two earthquake events is independent of the location of the stations: \[\Delta t = (t_2 + \delta t_A) - (t_1 + \delta t_A) = (t_2 + \delta t_B) - (t_1 + \delta t_B) = t_2 - t_1.\] This means that in practice, diagonals with the same offset $\Delta t$ and close starting times at multiple stations can be attributed to the same earthquake event. We require a pair of earthquake events to be observed at more than a user-specified number of stations in order to be considered a detection. On a run with 7 to 10 years of time series data from 11 seismic stations (27 channels), the postprocessing procedure effectively reduced the output from more than 2 terabytes of similar fingerprint pairs to around 30K timestamps of potential earthquakes. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/interevent.jpg} \vspace{-1.5em} \caption{Earthquakes from the same seismic source have fixed travel times to each seismic station (e.g. $\delta t_A$, $\delta t_B$ in the figure). The inter-event time between two occurrences of the same earthquake is invariant across seismic stations.} \label{fig:interevent} \end{figure} \subsection{Implementation and Optimization} \label{sec:networkimp} The volume of similarity search output poses serious challenges for the alignment procedure, as we often need to process results larger than the main memory of a single node. In this subsection, we describe our implementation and the new out-of-core adaptations of the algorithm that enable scaling to large output volumes. \minihead{Similarity search output format} The similarity search produces output in the form of triplets. A triplet $(dt, idx1, sim)$ is a non-zero entry in the similarity matrix, which represents that fingerprints $idx1$ and $(idx1 + dt)$ are hashed into the same bucket $sim$ times (out of $t$ independent trials). We use $sim$ as an approximation of the similarity between the two fingerprints.
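To make the triplet format concrete, the following toy sketch (values hypothetical) sums $sim$ over triplets that share the same $(dt, idx1)$ key and applies a threshold; this is an in-memory analogue of the channel-level combination described next:

```python
from collections import defaultdict

def combine_channels(channel_outputs, threshold):
    """Sum the similarity of triplets sharing (dt, idx1) across channels,
    then keep only entries above the station-level threshold.

    channel_outputs: per-channel lists of triplets [(dt, idx1, sim), ...].
    This in-memory version illustrates the semantics; the pipeline uses an
    external sort-merge-reduce when the combined map exceeds main memory.
    """
    combined = defaultdict(int)
    for triplets in channel_outputs:
        for dt, idx1, sim in triplets:
            combined[(dt, idx1)] += sim
    return {key: s for key, s in combined.items() if s >= threshold}

# Hypothetical toy triplets from two channels of one station:
chan1 = [(10, 42, 3), (25, 7, 2)]
chan2 = [(10, 42, 4)]
print(combine_channels([chan1, chan2], threshold=5))  # → {(10, 42): 7}
```

The external sort-merge-reduce described next computes the same result without materializing the map in memory.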
\minihead{Channel} First, given outputs of similar fingerprint pairs (or the non-zero entries of the similarity matrix) from different channels at the same station, we want to compute the combined similarity matrix with only entries above a predefined threshold. Na\"ively, we could update a shared hashmap of the non-zero entries of the similarity matrix for each channel in the station. However, since the hashmap might not fit in the main memory of a single machine, we utilize the following sort-merge-reduce procedure instead: \begin{enumerate}[noitemsep,topsep=.5em] \item In the sorting phase, we perform an external merge sort on the outputs from each channel, with $dt$ as the primary sort key and $idx1$ as the secondary sort key. That is, we sort the similar fingerprint pairs first by the diagonal that they belong to in the similarity matrix, and within the diagonals, by the start time of the pairs. \item In the merging phase, we perform a similar external merge sort on the already sorted outputs from each channel. This ensures that all matches generated by the same pair of fingerprints $idx1$ and $idx1 + dt$ at different channels appear in consecutive rows of the merged file. \item In the reduce phase, we traverse the merged file and combine the similarity scores of consecutive rows of the file that share the same $dt$ and $idx1$. We discard results that have combined similarity smaller than the threshold. \end{enumerate} \minihead{Station} Given a combined similarity matrix for each seismic station, represented in the form of its non-zero entries sorted by their corresponding diagonals and starting time, we want to cluster fingerprint matches generated by potential earthquake events, or cluster non-zero entries along the narrow diagonals in the matrix. We look for sequences of detections (non-zero entries) along each diagonal $dt$, where the largest gap between consecutive detections is smaller than a predefined gap parameter.
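A minimal sketch of this gap-based grouping along a single diagonal (function and parameter names hypothetical):

```python
def cluster_diagonal(entries, max_gap):
    """Group sorted idx1 positions on one diagonal into clusters, starting a
    new cluster whenever the gap between consecutive entries exceeds max_gap."""
    if not entries:
        return []
    clusters, current = [], [entries[0]]
    for idx in entries[1:]:
        if idx - current[-1] <= max_gap:
            current.append(idx)
        else:
            clusters.append(current)
            current = [idx]
    clusters.append(current)
    return clusters

# Detections at idx1 = 1, 2, 3 and 50, 52 on the same diagonal:
print(cluster_diagonal([1, 2, 3, 50, 52], max_gap=5))  # → [[1, 2, 3], [50, 52]]
```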
Empirically, permitting a gap helps ensure that an earthquake's P and S wave arrivals are assigned to the same cluster. Identification of the initial clusters along each diagonal $dt$ requires a linear pass through the similarity matrix. We then iteratively merge clusters in adjacent diagonals $dt-1$ and $dt+1$, with the restriction that the final cluster has a relatively narrow width. We store a few summary statistics for each cluster (e.g. the cluster's bounding box, the total number of entries) and prune small clusters and isolated fingerprint matches, which significantly reduces the output size. The station-level clustering dominates the runtime of the spatiotemporal alignment. In order to speed up the clustering, we partition the similarity matrix according to the diagonals, or ranges of $dt$s of the matched fingerprints, and perform clustering in parallel on each partition. A na\"ive equal-sized partition of the similarity matrix could lead to missed detections if a cluster that is split across two partitions is pruned in both due to its reduced size. Instead, we look for proper points of partition in the similarity matrix where there is a small gap between neighboring occupied diagonals. Again, we take advantage of the ordered nature of the similarity matrix entries. We uniformly sample entries in the similarity matrix, and for every pair of neighboring sampled entries, we only check the entries in between for partition points if the two sampled entries lie on diagonals far enough apart to be in two partitions. Empirically, a sampling rate of around 1\% works well for our datasets, in that most sampled entries are skipped because they are too close to be partitioned. \minihead{Network} Given groups of potential events at each station, we perform a similar summarization across the network in order to identify subsets of the events that can be attributed to the same seismic source. In principle, we could also partition and parallelize the network detection.
In practice, however, we found that the summarized event information at each station is already small enough that it suffices to compute in serial. \section{Fingerprint Extraction} \label{sec:fp} In this section, we describe the fingerprint extraction step that encodes time-frequency features of the input time series into compact binary vectors for similarity search. We begin with an overview of the fingerprinting algorithm~\cite{FASTFingerPrint} and the benefits of using fingerprints in place of the time series (Section~\ref{sec:fpoverview}). We then describe a new optimization that parallelizes and accelerates fingerprint generation via sampling (Section~\ref{sec:mad}). \subsection{Fingerprint Overview} \label{sec:fpoverview} Inspired by the success of feature extraction techniques for indexing audio snippets~\cite{FASTFingerPrint}, the fingerprint extraction step transforms continuous time series data into compact binary vectors (fingerprints) for similarity search. Each fingerprint encodes representative time-frequency features of the time series. The Jaccard similarity of two fingerprints, defined as the size of the intersection of their non-zero entries divided by the size of the union, preserves the waveform similarity of the corresponding time series segments. Compared to directly computing similarity on the time series, fingerprinting introduces frequency-domain features into the detection and provides additional robustness against translation and small variations~\cite{FASTFingerPrint}. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figs/fp.pdf} \caption{The fingerprinting algorithm encodes time-frequency features of the original time series into compact binary vectors.} \label{fig:fingerprint} \end{figure} Figure~\ref{fig:fingerprint} illustrates the individual steps of fingerprinting: \begin{enumerate}[topsep=2pt] \item \textbf{Spectrogram} Compute the spectrogram, a time-frequency representation, of the time series.
Slice the spectrogram into short overlapping segments using a sliding window and smooth by downsampling each segment into a spectral image of fixed dimensions. \item \textbf{Wavelet Transform} Compute the two-dimensional discrete Haar wavelet transform on each spectral image. The wavelet coefficients are a lossy compression of the spectral images. \item \textbf{Normalization} Normalize each wavelet coefficient by its median and the median absolute deviation (MAD) on the full, background-dominated dataset. \item \textbf{Top coefficients} Extract the top $K$ most anomalous wavelet coefficients, or the largest coefficients after MAD normalization, from each spectral image. By selecting the most anomalous coefficients, we focus only on coefficients that are most distinct from coefficients that characterize noise, which empirically leads to better detection results. \item \textbf{Binarize} Binarize the signs and positions of the top wavelet coefficients. We encode the sign of each normalized coefficient using 2 bits: $-1 \to$ 01, $0 \to$ 00, $1 \to$ 10. \end{enumerate} \subsection{Optimization: MAD via sampling} \label{sec:mad} The fingerprint extraction is implemented via scientific modules such as \texttt{scipy}, \texttt{numpy} and \texttt{PyWavelets} in Python. While its runtime grows linearly with input size, fingerprinting ten years of time series data can take several days on a single core. In the unoptimized procedure, normalizing the wavelet coefficients requires two full passes over the data. The first pass calculates the median and the MAD\footnote{For $X = \{x_1, x_2, ..., x_n\}$, the MAD is defined as the median of the absolute deviations from the median: $MAD = median(|x_i - median(X)|)$.} for each wavelet coefficient over the whole population, and the second pass normalizes the wavelet representation of each fingerprint accordingly. Given the median and MAD for each wavelet coefficient, the input time series can be partitioned and normalized in parallel.
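A sketch of the normalization statistics and their sampling-based approximation using \texttt{numpy} (sampling rate and array shapes hypothetical):

```python
import numpy as np

def mad_stats(coeffs):
    """Per-coefficient median and median absolute deviation (MAD).

    coeffs: 2-D array, one row of wavelet coefficients per spectral image.
    """
    med = np.median(coeffs, axis=0)
    mad = np.median(np.abs(coeffs - med), axis=0)
    return med, mad

def sampled_mad_stats(coeffs, rate, seed=0):
    """Approximate median/MAD from a random sample of the rows."""
    rng = np.random.default_rng(seed)
    n = max(1, int(len(coeffs) * rate))
    sample = coeffs[rng.choice(len(coeffs), size=n, replace=False)]
    return mad_stats(sample)

# Normalize each wavelet coefficient: (x - median) / MAD.
coeffs = np.random.default_rng(1).normal(size=(100_000, 4))
med, mad = sampled_mad_stats(coeffs, rate=0.01)
normalized = (coeffs - med) / mad
```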
Therefore, the computation of the median and MAD remains the runtime bottleneck. We accelerate the computation by approximating the true median and MAD with statistics calculated from a small random sample of the input data. The confidence interval for the MAD estimate with a sample size of $n$ narrows at a rate proportional to $n^{-1/2}$~\cite{madci}. We empirically find that, on one month of input time series data, sampling provides an order of magnitude speedup with almost no loss in accuracy. For input time series of longer duration (e.g. over a year), sampling 1\% or less of the input can suffice. We further investigate the trade-off between speed and accuracy under different sampling rates in the evaluation (Section~\ref{eval:params}, Appendix~\ref{appendix:eval}). \section{Background} \label{sec:bg} With the deployment of denser and increasingly sensitive sensor arrays, seismology is experiencing a rapid growth of high-resolution data~\cite{array}. Seismic networks with up to thousands of sensors have been recording years of continuous seismic data streams, typically at 100~Hz. The rising data volume has fueled strong interest in the seismology community in developing and applying scalable data-driven algorithms that improve the monitoring and prediction of earthquake events~\cite{dmreview,PCA,myshake}. In this work, we focus on the problem of detecting new, low-magnitude earthquakes from historical seismic data. Earthquakes, which are primarily caused by the rupture of geological faults, radiate energy that travels through the Earth in the form of seismic waves. Seismic waves induce ground motion that is recorded by seismometers. Modern seismometers typically include three components that measure simultaneous ground motion along the north-south, east-west, and vertical axes. Ground motion along each of these three axes is recorded as a separate \emph{channel} of time series data. Channels capture complementary signals for different seismic waves, such as the P-wave and the S-wave.
The P-waves travel along the direction of propagation, like sound, while the S-waves travel perpendicular to the direction of propagation, like ocean waves. The vertical channel, therefore, better captures the up-and-down motions caused by the P-waves, while the horizontal channels better capture the side-to-side motions caused by the S-waves. P-waves travel the fastest and are the first to arrive at seismic stations, followed by the slower but usually larger-amplitude S-waves. Hence, the P-wave and S-wave of an earthquake typically register as two ``big wiggles'' on the ground motion measurements (Figure~\ref{fig:similar_waveforms}). These impulsive arrivals of seismic waves are example characteristics of earthquakes that seismologists look for in the data. While it is easy for human eyes to identify large earthquakes on a single channel, accurately detecting small earthquakes usually requires looking at data from multiple channels or stations. These low-magnitude earthquakes pose challenges for conventional detection methods, which we outline below. Traditional energy-based earthquake detectors such as the short-term average (STA)/long-term average (LTA) detector identify earthquake events by their impulsive, high signal-to-noise P-wave and S-wave arrivals. However, these detectors are prone to high false positive and false negative rates at low magnitudes, especially with noisy backgrounds~\cite{staltalow}. Template matching, or waveform cross-correlation with template waveforms of known earthquakes, has proven more effective for detecting known seismic signals in noisy data~\cite{template1,template2}. However, the method relies on template waveforms of prior events and is not suitable for discovering events from unknown sources. In practice, almost all earthquakes greater than magnitude 5 are detected by these conventional means~\cite{mag5}.
In comparison, an estimated 1.5 million earthquakes with magnitude between 2 and 5 are not detected by conventional means, and 1.3 million of these are between magnitude 2 and 2.9. The estimate is based on the magnitude-frequency distribution of earthquakes~\cite{grlaw}. We are interested in detecting these low-magnitude earthquakes missing from public earthquake catalogs to better understand earthquake mechanics and sources, which inform seismic hazard estimates and prediction~\cite{smalleq1,smalleq2,smalleq3,smalleq4}. The earthquake detection pipeline we study in this paper is an unsupervised, data-driven approach that does not rely on supervised (i.e., labeled) examples of prior earthquake events, and is designed to complement existing, supervised detection methods. As in template matching, the method we optimize takes advantage of the high similarity between waveforms generated by reoccurring earthquakes. However, instead of relying on waveform templates from only known events, the pipeline leverages the recurring nature of seismic activity to detect similar waveforms in time and across stations. To do so, the pipeline performs an all-pairs time series similarity search, treating each segment of the input waveform data as a ``template'' for potential earthquakes. The proposed approach cannot detect an earthquake that occurs only once and is not similar enough to any other earthquake in the input data. Therefore, to improve detection recall, it is critical to be able to scale the analysis to input data of longer duration (e.g., years instead of weeks or months). \section{Related Work} In this section, we address related work in earthquake detection, LSH-based applications and time series similarity search. \minihead{Earthquake Detection} The original FAST work appeared in the seismology community, and has proven a useful tool in scientific discovery~\cite{FAST, FASTlarge}.
In this paper, we present FAST to a database audience for the first time, and report on both the pipeline composition and its optimization from a computational perspective. The results presented in this paper are the product of over a year of collaboration between our database research group and the Stanford earthquake seismology research group. The optimizations we present in this paper and the resulting scalability results of the optimized pipeline have not previously been published. We believe this represents a useful and innovative application of LSH to a real domain science tool that will be of interest to both the database community and researchers of LSH and time-series analytics. The problem of earthquake detection is decades old~\cite{oldtextbook}, and many classic techniques---many of which are in use today---were developed for an era in which humans manually inspected seismographs for readings~\cite{stalta1, stalta2}. With the rise of machine learning and large-scale data analytics, there has been increasing interest in further automating these techniques. While FAST is optimized to find many small-scale earthquakes, alternative approaches in the seismology community utilize template matching~\cite{template1, template2}, social media~\cite{twitter}, and machine learning techniques~\cite{nn, svm} to detect earthquakes. Most recently, with sufficient training data, supervised approaches have shown promising results in detecting non-repeating earthquake events~\cite{convquake}. In contrast, our LSH-based detection method does not rely on labeled earthquake events and detects reoccurring earthquake events. In the evaluation, we compare against two supervised methods~\cite{weasel,convquake} and show that our unsupervised pipeline is able to detect qualitatively different events from existing earthquake catalogs.
\minihead{Locality Sensitive Hashing} In this work, we perform a detailed case study of the practical challenges and the domain-specific solutions of applying LSH to the field of seismology. We do not contribute to advancing state-of-the-art LSH algorithms; instead, we show that classic LSH techniques, combined with domain-specific optimizations, can lead to scientific discoveries when applied at scale. Existing work shows that LSH performance is sensitive to key parameters such as the number of hash functions~\cite{lshtextsearch, lshmodel}; we provide supporting evidence and analysis on the performance implications of LSH parameters in our application domain. In addition to the core LSH techniques, we also present nontrivial preprocessing and postprocessing steps that enable an end-to-end detection pipeline, including the spatiotemporal alignment of LSH matches. Our work targets CPU workloads, complementing existing efforts that speed up similarity search on GPUs~\cite{searchgpu}. To preserve the integrity of the established science pipeline, we focus on optimizing the existing MinHash-based LSH rather than replacing it with potentially more efficient LSH variants such as LSH Forest~\cite{lshforest} and multi-probe LSH~\cite{multiprobe}. While we share observations with prior work that parallelizes and distributes a different LSH family~\cite{twitterlsh}, we present the unique challenges and opportunities of optimizing MinHash LSH in our application domain. We provide performance benchmarks against alternative similarity search algorithms in the evaluation, such as set similarity joins~\cite{setsimilarity} and an alternative LSH library based on recent theoretical advances in LSH for cosine similarity~\cite{falconn}. We believe the resulting experience report, as well as our open source implementation, will be valuable to researchers developing LSH techniques in the future.
\minihead{Time Series Analytics} Time series analytics is a core topic in large-scale data analytics and data mining~\cite{clusteringsurvey, tsbench, classificationsurvey}. In our application, we utilize time series similarity search as a core workhorse for earthquake detection. There are a number of distance metrics for time series~\cite{distancesurvey}, including Euclidean distance and its variants~\cite{lp}, Dynamic Time Warping~\cite{trillion}, and edit distance~\cite{lcss}. However, our input time series from seismic sensors are high-frequency (e.g. 100~Hz) and often noisy. Therefore, small time shifts, outliers and scaling can result in large changes in time-domain metrics~\cite{haar}. Instead, we encode time-frequency features of the input time series into binary vectors and focus on the Jaccard similarity between the binary feature vectors. This feature extraction procedure is an adaptation of the Waveprint algorithm~\cite{waveprint} initially designed for audio data; the key modification made for seismic data was to focus on frequency features that are the most discriminative from background noise, such that the average similarity between non-seismic signals is reduced~\cite{FASTFingerPrint}. An alternative binary representation models time series as points on a grid, and uses the non-empty grid cells as a set representation of the time series~\cite{setbased}. However, this representation does not take advantage of the physical properties distinguishing background noise from seismic signals. \section{LSH-based Similarity Search} \label{sec:search} In this section, we present the time series similarity search step based on LSH. We start with a description of the algorithm and the baseline implementation (Section~\ref{sec:searchimpl}), upon which we build the optimizations.
Our contributions include: an optimized hash signature generation procedure (Section~\ref{sec:hashgen}), an empirical analysis of the impact of hash collisions and LSH parameters on query performance (Section~\ref{sec:searchparam}), partitioning and parallelization of the LSH that reduce the runtime and memory usage (Section~\ref{sec:searchpart}), and finally, two domain-specific filters that improve both the performance and detection quality of the search (Section~\ref{sec:noise}). \subsection{Similarity Search Overview} \label{sec:searchimpl} Reoccurring earthquakes originating from nearby seismic sources appear as near-identical waveforms at the same seismic station. Given continuous ground motion measurements from a seismic station, our pipeline identifies similar time series segments from the input as candidates for reoccurring earthquake events. Concretely, we perform an approximate similarity search via MinHash LSH on the binary fingerprints to identify all pairs of fingerprints whose Jaccard similarity exceeds a predefined threshold~\cite{minhash}. MinHash LSH performs a random projection of high-dimensional data into a lower-dimensional space, hashing similar items to the same hash table ``bucket'' with high probability (Figure~\ref{fig:lsh}). Instead of performing na\"ive pairwise comparisons between all fingerprints, LSH limits the comparisons to fingerprints sharing the same hash bucket, significantly reducing the computation. The ratio of the average number of comparisons per query to the size of the dataset, or \emph{selectivity}, is a machine-independent proxy for query efficiency~\cite{lshmodel}. \minihead{Hash signature generation} The MinHash of a fingerprint is the first non-zero element of the fingerprint under a given random permutation of its elements. The permutation is defined by a hash function mapping fingerprint elements to random indices. Let $p$ denote the collision probability of a hash signature generated with a single hash function.
By increasing the number of hash functions $k$, the collision probability of the hash signature decreases to $p^k$~\cite{mmds}. \minihead{Hash table construction} Each hash table stores an independent mapping of fingerprints to hash buckets. The tables are initialized by mapping hash signatures to lists of fingerprints that share the same signature. Empirically, we find that using $t=100$ hash tables suffices for our application, and there is little gain in further increasing the number of hash tables. \minihead{Search} The search queries the hash tables for each fingerprint's near-neighbor candidates, or other fingerprints that share the query fingerprint's hash buckets. We keep track of the number of times the query fingerprint and each candidate have matching hash signatures in the hash tables, and output candidates with matches above a predefined threshold. The number of matches is also used as a proxy for the confidence of the similarity in the final step of the pipeline. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/lsh-small.pdf} \vspace{-1.5em} \caption{Locality-sensitive hashing hashes similar items to the same hash ``bucket'' with high probability. } \label{fig:lsh} \end{figure} \subsection{Optimization: Hash signature generation} \label{sec:hashgen} In this subsection, we present both memory access pattern and algorithmic improvements to speed up the generation of hash signatures. We show that, together, the optimizations lead to an over 3$\times$ improvement in hash generation time (Section~\ref{sec:e2e}). Similar to observations made for SimHash (a different hash family for angular distances)~\cite{twitterlsh}, a na\"ive implementation of MinHash generation can suffer from poor memory locality due to the sparsity of the input data.
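For reference, the na\"ive generation can be sketched as follows; each non-zero element triggers a read at a scattered position of every hash mapping, which is the locality problem addressed in this subsection (structure simplified relative to the actual implementation):

```python
import random

def minhash_signature(nonzero_idx, hash_mappings):
    """Naive MinHash: for each hash mapping h, take the minimum mapped value
    over the fingerprint's non-zero elements. Note the scattered reads into
    each h, the source of the poor cache locality discussed in the text."""
    return [min(h[i] for i in nonzero_idx) for h in hash_mappings]

dim, k = 8192, 4
rng = random.Random(0)
# One random mapping of element index -> value per hash function.
mappings = [[rng.randrange(2**32) for _ in range(dim)] for _ in range(k)]
fp = [3, 17, 4000, 8000]   # sparse fingerprint: positions of non-zero bits
sig = minhash_signature(fp, mappings)
```

The memory-access optimization described next reorders the loops so that the fingerprint dimension, not the hash function, is the outer loop, blocking the reads into rows of the hash mapping array.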
SimHash functions are evaluated as a dot product between the input and hash mapping vectors, while MinHash functions are evaluated as a minimum of the hash mappings corresponding to non-zero elements of the input. For sparse input, both functions access scattered, non-contiguous elements in the hash mapping vector, causing an increase in cache misses. We improve the memory access pattern by blocking the accesses to the hash mappings. We use dimensions of the fingerprint, rather than hash functions, as the main loop for each fingerprint. As a result, the lookups for each non-zero element in the fingerprint are blocked into rows in the hash mapping array. For our application, this loop order has the additional advantage of exploiting the high overlap (e.g. over 60\% in one example) between neighboring fingerprints. The overlap means that previously accessed elements in the hash mappings are likely to be reused while still in cache, further improving the memory locality. In addition, we speed up the hash signature generation by replacing MinHash with Min-Max hash. MinHash only keeps the minimum value for each hash mapping, while Min-Max hash keeps both the min and the max. Therefore, to generate hash signatures with similar collision probability, Min-Max hash halves the number of required hash functions. Previous work showed that Min-Max hash is an unbiased estimator of pairwise Jaccard similarity, and achieves similar and sometimes smaller mean squared error (MSE) in estimating pairwise Jaccard similarity in practice~\cite{minmaxhash}. We include pseudocode for the optimized hash signature calculation in Appendix~\ref{appendix:hash} of this report. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figs/imba.pdf} \vspace{-1em} \caption{Probability that each element in the fingerprint is equal to 1, averaged over 15.7M fingerprints, each of dimension 8192, generated from a year of time series data.
The heatmap shows that some elements of the fingerprint are much more likely to be non-zero than others. } \label{fig:imbalance} \end{figure} \subsection{Optimization: Alleviating hash collisions} \label{sec:searchparam} Perhaps surprisingly, our initial LSH implementation demonstrated poor scaling with the input size: with a 5$\times$ increase in input, the runtime increased by 30$\times$. In this subsection, we analyze the cause of the LSH performance degradation and the performance implications of core LSH parameters in our application. \minihead{Cause of hash collisions} Poor distribution of hash signatures can lead to large LSH hash buckets or high query \emph{selectivity}, significantly degrading the performance of the similarity search~\cite{lshforest, lshskew}. For example, in the extreme case when all fingerprints are hashed into a single bucket, the \emph{selectivity} equals 1 and the LSH performance is equivalent to that of the exhaustive $O(n^2)$ search. Our input fingerprints encode physical properties of the waveform data. As a result, the probability that each element in the fingerprint is non-zero is highly non-uniform (Figure~\ref{fig:imbalance}). Moreover, fingerprint elements are not necessarily independent, meaning that certain fingerprint elements are likely to co-occur: given that an element $a_i$ is non-zero, the element $a_j$ has a much higher probability of being non-zero ($\mathbb{P}[a_i = 1, a_j = 1] > \mathbb{P}[a_i = 1] \times \mathbb{P}[a_j = 1]$). This correlation has a direct impact on the collision probability of MinHash signatures. For example, if a hash signature contains $k$ independent MinHashes of a fingerprint and two of the non-zero elements responsible for the MinHashes are dependent, then the signature effectively has a collision probability similar to that of a signature with only $k-1$ MinHashes. In other words, more fingerprints are likely to be hashed to the same bucket under this signature.
For fingerprints shown in Figure~\ref{fig:imbalance}, the largest 0.1\% of the hash buckets contain an average of 32.9\% of the total fingerprints for hash tables constructed with $6$ hash functions. \minihead{Performance impact of LSH parameters} The precision and recall of the LSH can be tuned via two key parameters: the number of hash functions $k$ and the number of hash table matches $m$. Intuitively, using $k$ hash functions is equivalent to requiring that two fingerprints agree at $k$ randomly selected non-zero positions. Therefore, the larger the number of hash functions, the lower the probability of collision. To improve recall, we increase the number of independent permutations to make sure that similar fingerprints can land in the same hash bucket with high probability. Formally, given two fingerprints with Jaccard similarity $s$, the probability that, with $k$ hash functions, the fingerprints are hashed to the same bucket at least $m$ times out of $t=100$ hash tables is: \[\mathbb{P}[s] = 1 - \sum_{i = 0} ^{m - 1} [\binom{t}{i}(1 - s^{k})^{t-i}(s^{k})^i].\] The probability of detection success as a function of Jaccard similarity has the form of an S-curve (Figure~\ref{fig:prob}). The S-curve shifts to the right with an increase in the number of hash functions $k$ or the number of matches $m$, raising the effective Jaccard similarity threshold of the LSH. Figure~\ref{fig:prob} illustrates near-identical probability-of-success curves under different parameter settings. Due to the presence of correlations in the input data, LSH parameters with the same theoretical success probability can have vastly different runtimes in practice. Specifically, as the number of hash functions increases, the expected average size of hash buckets decreases, which can lead to an order of magnitude speedup in the similarity search for seismic data in practice.
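The probability above is easy to evaluate numerically; the sketch below computes the S-curve for arbitrary settings of $k$ and $m$ (example values hypothetical):

```python
from math import comb

def detection_prob(s, k, m, t=100):
    """P[a fingerprint pair with Jaccard similarity s matches in at least
    m of t hash tables], with k hash functions per table signature."""
    p = s ** k  # per-table collision probability of the k-function signature
    return 1 - sum(comb(t, i) * (1 - p) ** (t - i) * p ** i for i in range(m))

# Evaluate one setting of (k, m) at a few similarity values:
for s in (0.2, 0.35, 0.5):
    print(f"s={s}: {detection_prob(s, k=6, m=5):.3f}")
```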
However, to keep the success probability curve constant with increased hash functions, the number of matches needs to be lowered, which increases the probability of spurious matches. These spurious matches can be suppressed by scaling up the number of total hash tables, at the cost of larger memory usage. We further investigate the performance impact of LSH parameters in the evaluation. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figs/success.pdf} \vspace{-1em} \caption{Theoretical probability of a successful search versus Jaccard similarity between fingerprints ($k$: number of hash functions, $m$: number of matches). Different LSH parameter settings can have near identical detection probability with vastly different runtime.} \label{fig:prob} \end{figure} \subsection{Optimization: Partitioning} \label{sec:searchpart} In this subsection, we describe the partition and parallelization of the LSH that further reduce its runtime and memory footprint. \minihead{Partition} Using a 1-second lag for adjacent fingerprints results in around 300M total fingerprints for 10 years of time series data. Given a hash signature of 64 bits and 100 total hash tables, the total size of hash signatures is approximately 250 GB. To avoid expensive disk I/O, we also want to keep all hash tables in memory for lookups. Taken together, this requires several hundred gigabytes of memory, which can exceed available main memory. To scale to larger input data on a single node with the existing LSH implementation, we perform similarity search in partitions. We evenly partition the fingerprints and populate the hash tables with one partition at a time, while still keeping the lookup table of fingerprints to hash signatures in memory. During query, we output matches between fingerprints in the current partition (or in the hash tables) with all other fingerprints and subsequently repeat this process for each partition. 
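The partitioned query scheme can be sketched as follows (a simplified, hypothetical implementation: a single hash table keyed by exact signature match stands in for the banded multi-table lookup):

```python
from collections import defaultdict

def partitioned_search(signatures, num_partitions):
    # signatures: list mapping fingerprint id -> hash signature. The full
    # id -> signature lookup stays in memory, but only the current
    # partition's signatures populate the hash table at any time.
    n = len(signatures)
    size = -(-n // num_partitions)  # ceiling division
    matches = set()
    for p in range(num_partitions):
        table = defaultdict(list)
        for i in range(p * size, min((p + 1) * size, n)):
            table[signatures[i]].append(i)
        # Query *all* fingerprints against the current partition's table.
        for q in range(n):
            for cand in table.get(signatures[q], []):
                if cand != q:
                    matches.add((min(q, cand), max(q, cand)))
    return matches

sigs = ["a", "b", "a", "c", "b", "a"]
assert partitioned_search(sigs, 3) == partitioned_search(sigs, 1)
```

Every matching pair is found in the partition holding one of its endpoints, which is why the result is independent of the number of partitions.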
The partitioned search yields identical results to the original search, with the benefit that only a subset of the fingerprints are stored in the hash tables in memory. We can partition the lookup table of hash signatures similarly to further reduce memory. We illustrate the performance and memory trade-offs under different numbers of partitions in Section~\ref{eval:params}. The idea of populating the hash table with a subset of the input could also be favorable for performing a small number of nearest neighbor queries on a large dataset, e.g., a thousand queries on a million items. There are two ways to execute the queries. We can hash the full dataset and then perform a thousand queries to retrieve near neighbor candidates in each query item's hash buckets; alternatively, we can hash only the query items and for every other item in the dataset, check whether it is mapped to an existing bucket in the table. While the two methods yield identical query results, the latter could be $8.6\times$ faster since the cost of initializing the hash table dominates that of the search. It is possible to further improve LSH performance and memory usage with the more space efficient variants such as multi-probe LSH~\cite{multiprobe}. However, given that the alignment step uses the number of hash buckets shared between fingerprints as a proxy for similarity, and that switching to a multi-probe implementation would alter this similarity measure, we preserve the original LSH implementation for backwards compatibility with FAST. We compare against alternative LSH implementations and demonstrate the potential benefits of adopting multi-probe LSH in the evaluation (Section~\ref{sec:falconn}). \minihead{Parallelization} Once the hash mappings are generated, we can easily partition the input fingerprints and generate the hash signatures in parallel. 
Similarly, the query procedure can be parallelized by running nearest neighbor queries for different fingerprints and outputting results to files in parallel. We show in Section~\ref{eval:params} that the total hash signature generation time and similarity search time reduce nearly linearly with the number of processes. \subsection{Optimization: Domain-specific filters} \label{sec:noise} \begin{figure} \centering \includegraphics[width=\linewidth]{figs/noise.pdf} \vspace{-1.5em} \caption{The short, three-spike pattern is an example of similar and repeating background signals not due to seismic activity. These repeating noise patterns cause scalability challenges for LSH.} \label{fig:ltz_noise} \end{figure} Like many other sensor measurements, seismometer readings can be noisy. In this subsection, we address a practical challenge of the detection pipeline, in which similar non-seismic signals dominate the seismic findings in both runtime and detection results. We show that by leveraging domain knowledge, we can greatly increase both the efficiency and the quality of the detection. \minihead{Filtering irrelevant frequencies} Input time series may contain station-specific narrow-band noise that repeats over time. Similar time series segments generated by noise can be identified as near neighbors, or earthquake candidates, in the similarity search. To suppress false positives generated from noise, we apply a bandpass filter to exclude frequency bands that show high average amplitudes and repeating patterns while containing little seismic activity. The bandpass filter is selected manually by examining short spectrogram samples, typically an hour long, of the input time series, based on seismological knowledge. Typical bandpass filter ranges span from 2 to 20~Hz. Prior work~\cite{FAST, FASTlarge, FASTFingerPrint, networkpaper} proposes the idea of filtering irrelevant frequencies, but only on input time series. 
We extend the filter to the fingerprinting algorithm and cut off spectrograms at the corner of the bandpass filter, which empirically improves detection performance. We perform a quantitative evaluation of the impact of bandpass filters on both the runtime and result quality (Section~\ref{eval:domain}). \minihead{Removing correlated noise} Repeating non-seismic signals can also occur in frequency bands containing rich earthquake signals. Figure~\ref{fig:ltz_noise} shows an example of strong repeating background signals from a New Zealand seismic station. A large cluster of repeating signals with high pairwise similarity could produce nearest neighbor matches that dominate the similarity search, leading to a 10$\times$ increase in runtime and an over 100$\times$ increase in output size compared to results from similar stations. This poses problems both for computational scalability and for seismological interpretability. We develop an occurrence filter for the similarity search by exploiting the rarity of earthquake signals. Specifically, if a specific fingerprint generates too many nearest neighbor matches in a short duration of time, we can be fairly confident that it is not an earthquake signal. This observation holds in general except in special scenarios such as volcanic earthquakes~\cite{volcanic}. During the similarity search, we dynamically generate a list of fingerprints to exclude from future search. If the number of near neighbor candidates a fingerprint generates is larger than a predefined percentage of the total fingerprints, we exclude this fingerprint as well as its neighbors from future similarity search. To capture repeating noise over a short duration of time, the filter can be applied on top of the partitioned search. In this case, the filtering threshold is defined as the percentage of fingerprints in the current partition, rather than in the whole dataset. 
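The occurrence filter might look as follows (a hypothetical sketch; the threshold semantics follow the description above, but the names are ours):

```python
def apply_occurrence_filter(candidates, partition_size, max_fraction):
    # candidates: dict mapping fingerprint id -> set of near-neighbor ids
    # found so far in the current partition. A fingerprint matching more
    # than max_fraction of the partition is deemed repeating noise; it and
    # its neighbors are excluded from future similarity search.
    limit = max_fraction * partition_size
    excluded = set()
    for fp, neighbors in candidates.items():
        if len(neighbors) > limit:
            excluded.add(fp)
            excluded |= neighbors
    return excluded

cands = {0: {1, 2, 3, 4}, 5: {6}}
print(sorted(apply_occurrence_filter(cands, 10, 0.3)))  # [0, 1, 2, 3, 4]
```

Fingerprint 0 exceeds the 30\% threshold, so it and its four neighbors are excluded, while the rare match of fingerprint 5 survives.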
On the example dataset above, this approach filtered out around 30\% of the total fingerprints with no false positives. We evaluate the effect of the occurrence filter on different datasets under different filtering thresholds in Section~\ref{eval:domain}.
\section{Introduction} Ingleton's condition~\cite{Ingleton1971} is a well-known linear inequality that holds for representable matroids but not matroids in general; it states that for all $A,B,C,D \subseteq E$ in a representable matroid $M = (E,r)$, \begin{multline*} r(A \cup B) + r(A \cup C) + r(A \cup D) + r(B \cup C) + r(B \cup D) \\ \ge r(A) + r(B) + r(A \cup B \cup C) + r(A \cup B \cup D) + r(C \cup D). \end{multline*} An arbitrary matroid is \emph{Ingleton} if the above inequality is satisfied for all choices of $A,B,C,D$. The class of Ingleton matroids is closed under minors and duality (see, for example, Lemmas~3.9 and~4.5 in~\cite{Cameron14}) and clearly all representable matroids are Ingleton. A natural question is to what extent a converse of this last statement holds: that is, do Ingleton matroids tend to be representable? We prove here that they do not. For $n \ge 12$, the number of representable matroids on $[n]$ is at most $2^{0.25n^3}$~\cite{Nelson2016}; our main result is the following. \begin{theorem}\label{main} For all sufficiently large $n$ and all $0 < r < n$, the number of Ingleton matroids with ground set $[n]$ is at least $2^{0.486 \tfrac{\log (r(n-r))}{r(n-r)}\binom{n}{r}}$. \end{theorem} Even when $r = 4$, this eclipses the upper bound on the number of representable matroids on $[n]$ with no restriction on rank; thus, almost all Ingleton matroids are non-representable. When $r = \lfloor n/2 \rfloor$, the above formula is around $2^{\frac{1.94}{n^2}\binom{n}{n/2}}$ which is doubly exponential in $n$, and even somewhat resembles the number of all matroids on $[n]$, which is $2^{\Theta\left(\frac{1}{n}\binom{n}{\lfloor n/2 \rfloor}\right)}$ with a constant between $1$ and $2+o(1)$~\cite{BansalPendavinghvanderPol2015}. 
We conjecture, however, that general matroids tend not to be Ingleton: \begin{conjecture} There is a constant $c$ such that the number of Ingleton matroids on $[n]$ is $2^{(c+o(1)) \tfrac{\log n}{n^2}\binom{n}{\lfloor n/2 \rfloor}}$. \end{conjecture} Conceivably, the constant $c$ could be equal to $2$, or even be the one of around $1.94$ obtained by our proof. In what follows, we assume some familiarity with matroid theory; see~\cite{Oxley2011}. In particular, a \emph{nonbasis} of a rank-$r$ matroid $M$ is a set in $\binom{E(M)}{r}$ that is not a basis of $M$; and a matroid is \emph{paving} if all its nonbases are circuits and \emph{sparse paving} if $M$ and $M^*$ are both paving. Logarithms are all base-two. As part of the proof of Theorem~\ref{main}, we also characterize exactly which sparse paving matroids are Ingleton, and as a result easily derive the following theorem: \begin{theorem}\label{forty} There are exactly $41$ excluded minors for the class of Ingleton sparse paving matroids: the matroids $U_{0,2}\oplus U_{1,1}$ and $U_{2,2} \oplus U_{0,1}$, and the $39$ rank-$4$ non-Ingleton sparse paving matroids on eight elements. \end{theorem} The fact that this set is even finite is curious; the class of all Ingleton matroids, by contrast, has an infinite set of excluded minors, constructed in~\cite{MayhewNewmanWhittle2009}. In fact, their techniques show that every $\mathbb R$-representable matroid is a minor of an excluded minor for the class of Ingleton matroids. Theorems~\ref{main} and~\ref{forty} together imply that the Ingleton matroids are a `large' minor-closed class of matroids (in the sense of numbering at least $2^{2^n/\mathrm{poly}(n)}$) that omits $39$ different sparse paving matroids. It was conjectured in~\cite{MayhewNewmanWelshWhittle2011} that any minor-closed class not containing all sparse paving matroids is asymptotically vanishing; our result shows that such a class may still be `large'. 
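Ingleton's condition is directly checkable by brute force on small ground sets. The following sketch (our own illustration, not part of the proofs below) verifies the inequality for every choice of $A,B,C,D$ in the uniform matroid $U_{2,4}$, which is $\mathbb R$-representable and hence Ingleton:

```python
from itertools import combinations

def is_ingleton(ground, rank):
    # Check Ingleton's inequality for all quadruples (A, B, C, D) of subsets.
    subsets = [frozenset(s) for r in range(len(ground) + 1)
               for s in combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            for C in subsets:
                for D in subsets:
                    lhs = (rank(A | B) + rank(A | C) + rank(A | D)
                           + rank(B | C) + rank(B | D))
                    rhs = (rank(A) + rank(B) + rank(A | B | C)
                           + rank(A | B | D) + rank(C | D))
                    if lhs < rhs:
                        return False
    return True

uniform_rank = lambda S: min(len(S), 2)  # rank function of U_{2,4}
print(is_ingleton(range(1, 5), uniform_rank))  # True
```

The quadruple loop is exponential in the ground set size, so this is only feasible for small examples such as the eight-element matroids appearing in Theorem~\ref{forty}.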
\section{Representing matroids with few nonbases} \begin{lemma}\label{hall} Let $M$ be a rank-$r$ matroid in which each set $\mathcal{W}$ of nonbases with $|\mathcal{W}| > 1$ satisfies $|\cap \mathcal{W}| \le r - |\mathcal{W}|$. Then $M$ is $\mathbb R$-representable. \end{lemma} \begin{proof} Let $\mathcal{X} = \{X_1, \dotsc, X_t\}$ be the set of nonbases of $M$; note that $0 \le |\cap \mathcal{X}| \le r-t$, so $t \le r$. Let $A$ be an $[r] \times E$ real matrix so that the nonzero entries of $A$ are algebraically independent over $\mathbb Q$, and $A_{i,e} = 0$ if and only if $i \in [t]$ and $e \in X_i$. We prove that $M = M(A)$. It is clear that for every nonbasis $W$ of $M$, the matrix $A[W]$ has a zero row so is singular; it remains to show that $A[B]$ is nonsingular for each basis $B$ of $M$. Let $B$ be a basis of $M$. Consider the bipartite graph $G$ with bipartition $([r],B)$ for which $(i,e)$ is an edge if and only if $A_{i,e} \ne 0$. Note that each $i \in \{t+1, \dotsc, r\}$ has degree $r$ in $G$. For each $S \subseteq [r]$, let $N(S)$ denote the set of vertices in $B$ that are adjacent to a vertex in $S$. We argue that $|N(S)| \ge |S|$ for each $S \subseteq [r]$; it will follow from Hall's theorem that $G$ has a perfect matching. Let $S \subseteq [r]$. If $S \not\subseteq [t]$ then clearly $N(S) = B$ and so $|N(S)| = r \ge |S|$. If $S \subseteq [t]$, then by hypothesis the set $\bigcap_{s \in S}X_s$ has size at most $r-|S|$, so $B$ contains at least $|S|$ elements $e$ for which there is some $s \in S$ with $e \notin X_s$. Each such $e$ is adjacent to $s$, so $|N(S)| \ge |S|$ as required. Therefore $G$ has a perfect matching. Let $B = \{b_1, \dotsc, b_r\}$ and $S_r$ denote the symmetric group on $[r]$; note that $A[B]$ is singular if and only if the determinant $\sum_{\sigma \in S_r}\prod_{i \in [r]}A_{i,b_{\sigma(i)}}$ is zero. 
This determinant is a polynomial in the entries of $A$, with integer coefficients, whose nonzero monomials are algebraically independent over $\mathbb Q$, and since $G$ has a perfect matching, some monomial is nonzero. It follows that the determinant is nonzero, so $A[B]$ is nonsingular, as required. \end{proof} \begin{lemma}\label{atmostfour} Every matroid with at most four nonbases is $\mathbb R$-representable. \end{lemma} \begin{proof} Let $M$ be a minor-minimal counterexample. Note that $M$ is simple, that $r(M) \ge 3$, and that $M^*$ is also a minor-minimal counterexample, so $M$ is cosimple with $r^*(M) \ge 3$. If $M$ has an element $e$ in no nonbases, then since $e$ is not a coloop, $M$ is the free extension of $M \!\setminus\! e$ by $e$; since $M \!\setminus\! e$ is $\mathbb R$-representable, so is $M$, a contradiction. So every element is in a nonbasis of $M$. Dually, no element is in all nonbases of $M$. If $M$ has a dependent set $Y$ of size $r(M)-1$, then $Y \cup \{e\}$ is a nonbasis for each $e \in E(M)-Y$. Since no element of $Y$ is in four nonbases, this gives $|E(M)-Y| \le 3$, so $|E(M)|\le r+2$, giving $r^*(M) \le 2$, a contradiction. Therefore every circuit of $M$ is spanning, so $M$ is a paving matroid; dually, $M$ is a sparse paving matroid. If $e \in E(M)$ is in exactly one nonbasis $X$, then $M$ is the principal extension of the flat $X - \{e\}$ in $M \!\setminus\! e$, so $M$ is $\mathbb R$-representable, a contradiction. Therefore every $e \in E(M)$ is in at least two nonbases. Dually, every element lies outside at least two nonbases. Therefore $M$ has exactly four nonbases, and every element is in exactly two of them. If $r(M) = 3$, then $M$ has four triangles, so there are $12$ pairs $(e,T)$ where $T$ is a triangle containing $e$, and every element is in two triangles, so there are also $2|E(M)|$ such pairs $(e,T)$. Thus $|E(M)| = 6$ and so $M$ is $\mathbb R$-representable, a contradiction. Suppose, therefore, that $r(M) \ge 4$. 
By Lemma~\ref{hall}, we may assume that there is some set $\mathcal{X}$ of nonbases of $M$ with $|\mathcal{X}| > 1$ such that $|\cap \mathcal{X}| > r - |\mathcal{X}|$. Since no element is in three nonbases, if $|\mathcal{X}| > 2$ then $|\cap \mathcal{X}| = 0 \le r - |\mathcal{X}|$, so we must have $|\mathcal{X}| = 2$ and thus there are nonbases $X_1,X_2$ with $|X_1 \cap X_2| = r-1$. This contradicts the fact that $M$ is a sparse paving matroid. \end{proof} \section{Ingleton Matroids} In this section, we use the well-known fact that $H \subseteq \binom{E}{r}$ is the set of nonbases of a sparse paving matroid on $E$ if and only if no two elements of $H$ have intersection of size exactly $r-1$. \begin{lemma}\label{ingletonviolations} Let $M$ be a rank-$r$ sparse paving matroid. Sets $A,B,C,D$ violate the Ingleton inequality in $M$ if and only if there are pairwise disjoint sets $X_1,X_2,X_3,X_4,Y,Z_1,Z_2 \subseteq E(M)$ such that \begin{itemize} \item $|X_i| = 2$ for each $i \in [4]$ while $|Y \cup Z_1 \cup Z_2| = r-4$, \item $A = X_1 \cup Y \cup Z_1 \cup Z_2$, \item $B = X_2 \cup Y \cup Z_1 \cup Z_2$, \item $C = X_3 \cup Y \cup Z_1$, and \item $D = X_4 \cup Y \cup Z_2$, \end{itemize} while each of $A \cup B, A\cup C,A \cup D,B \cup C$ and $B \cup D$ is a circuit-hyperplane of $M$, and $C \cup D$ is a basis. \end{lemma} \begin{proof} If the above conditions are satisfied, then the Ingleton inequality is evidently violated. Conversely, let $A,B,C,D$ violate the Ingleton inequality. For each matroid $N$ with $A \cup B \cup C \cup D \subseteq E(N)$, let \[h_1(N) = r_N(A) + r_N(B) + r_N(A \cup B \cup C) + r_N(A \cup B \cup D) + r_N(C \cup D)\] and \[h_2(N) = r_N(A \cup B) + r_N(A \cup C) + r_N(A \cup D) + r_N(B \cup C) + r_N(B \cup D),\] so $h_1(M) > h_2(M)$ by assumption. \begin{claim} $A \cup B$, $A \cup C$, $A \cup D$, $B \cup C$ and $B \cup D$ are circuit-hyperplanes. \end{claim} \begin{proof}[Proof of claim:] Suppose otherwise. 
Let $M'$ be obtained from $M$ by relaxing each circuit-hyperplane other than those among the five sets above, so $M'$ is sparse paving and has at most four circuit-hyperplanes. By Lemma~\ref{atmostfour}, $M'$ is $\mathbb R$-representable, so $h_1(M') \le h_2(M')$. By construction, we have $h_2(M') = h_2(M)$, and since $M'$ is freer than $M$, we have $h_1(M') \ge h_1(M)$. Therefore \[h_2(M) < h_1(M) \le h_1(M') \le h_2(M') = h_2(M),\] a contradiction. \end{proof} \begin{claim} $|A| = |B| = r-2$, and the sets $A \cup B \cup C$, $A \cup B \cup D$ and $C \cup D$ are spanning in $M$. \end{claim} \begin{proof}[Proof of claim:] The first claim gives $h_2(M) = 5r-5$, so by assumption $h_1(M) \ge 5r-4$. If $A \cup B \subseteq \cl_M(A)$, then we have \[h_2(M) = r_M(A) + r_M(A \cup B \cup C) + r_M(A \cup B \cup D) + r_M(B \cup C) + r_M(B \cup D)\] and $h_2(M) - h_1(M) = r_M(B \cup C) + r_M(B \cup D) - r_M(B) - r_M(C \cup D) \ge 0$ by submodularity, a contradiction. So $A \cup B \not\subseteq \cl_M(A)$; since $A \cup B$ is a circuit, it follows that $|A| = r_M(A) \le r-2$ and, symmetrically, that $|B| = r_M(B) \le r-2$. Therefore \begin{align*} 3r &\ge r_M(A \cup B \cup C) + r_M(A \cup B \cup D) + r_M(C \cup D) \\ &= h_1(M)-r_M(A)-r_M(B)\\ &\ge (5r-4)-2(r-2) = 3r, \end{align*} so we have equality throughout, and $r_M(A \cup B \cup C) = r_M(A \cup B \cup D) = r_M(C \cup D) = r$ while $|A| = |B| = r-2$. \end{proof} For each nonempty subset $S$ of $\{A,B,C,D\}$, write $J_S$ for the collection of elements belonging to all sets in $S$ but no sets in $\{A,B,C,D\}-S$, and let $n_S = |J_S|$. For example, $n_{AB}$ denotes $|(A \cap B)-(C \cup D)|$ (we omit commas and braces). Since $A \cup C$ and $B \cup C$ are circuit-hyperplanes in a sparse paving matroid, \begin{align*} 2 &\le |(A \cup C) - (B \cup C)| \\ & = |A - (B \cup C)|\\ & = |A-B| - |(A \cap C)-B| \\ & = 2 - |(A \cap C)-B|, \end{align*} so $(A \cap C)-B = \varnothing$, giving $n_{AC} = n_{ACD} = 0$. 
Using the symmetry between $A$ and $B$ and between $C$ and $D$, we also have $n_{AD} = n_{BC} = n_{BD} = n_{BCD} = 0$. Therefore $n_C + n_{CD} = n_C + n_{CD} + n_{BCD} + n_{BD} = |C-A| = |C \cup A| - |A| = 2$. Since $A \cup C$ and $A \cup D$ are circuit-hyperplanes, we have $2 \le |(A \cup D) - (A \cup C)| = n_D + n_{BD} = n_D = 2-n_{CD}$, from which we get $n_{CD} = 0$ and $n_D = 2$, and symmetrically $n_C = 2$. Moreover $n_A = n_A + n_{AC} + n_{AD} + n_{ACD} = |A-B| = |A \cup B| - |B| = r-(r-2)= 2$, and symmetrically $n_B = 2$. The four undetermined $n_S$ thus far are $n_{AB},n_{ABCD},n_{ABC}$ and $n_{ABD}$; all others have been shown to be zero except $n_A = n_B = n_C = n_D = 2$. Using the fact that $C \cup D$ is spanning, we thus have \[r \le |C \cup D| = n_{ABCD} + n_{ABC} + n_{ABD} + n_C + n_D.\] On the other hand, \[r-2 = |A| = n_{ABCD} + n_{ABC} + n_{ABD} + n_{AB} + n_A;\] since $n_A = n_C = n_D = 2$, these together imply that $n_{AB} = 0$. The above also gives $n_{ABCD} + n_{ABC} + n_{ABD} = r-4$. Now setting $(X_1,X_2,X_3,X_4,Y,Z_1,Z_2) = (J_A,J_B,J_C,J_D,J_{ABCD},J_{ABC},J_{ABD})$ gives the required structure. Finally, we see that $|C \cup D| = n_{ABCD} + n_{ABD} + n_{ABC} + n_C + n_D = (r-4)+4 = r$; since $C \cup D$ is spanning, it must be a basis. \end{proof} A simpler characterisation of these matroids below follows with $K = Y \cup Z_1 \cup Z_2$ and the $P_i$ equal to some ordering of the $X_i$ above. \begin{corollary}\label{symmdiff} Let $M$ be a sparse paving matroid. Then $M$ is non-Ingleton if and only if there are pairwise disjoint sets $P_1,P_2,P_3,P_4,K$ so that $|K| = r-4$ and $|P_i| = 2$ for each $i$, and exactly five of the six sets of the form $K \cup P_i \cup P_j \colon i \ne j$ are circuit-hyperplanes of $M$. 
\end{corollary} As observed in \cite{Cameron14}, if $M$ is a matroid for which the above condition holds, then it also holds in the eight-element, rank-$4$ matroid $(M / K)|(P_1 \cup P_2 \cup P_3 \cup P_4)$; therefore, every non-Ingleton sparse paving matroid has an eight-element, rank-$4$ non-Ingleton sparse paving matroid as a minor. Mayhew and Royle~\cite{MayhewRoyle2008} showed that there are precisely $39$ such matroids; for every such matroid $N$, the V\'{a}mos matroid $V_8$ can be obtained from $N$ by a sequence of circuit-hyperplane relaxations. (We remark that~\cite{MayhewRoyle2008} uses different terminology from ours, calling these matroids `Ingleton non-representable' rather than `non-Ingleton'.) The unique minor-minimal matroids that are not sparse paving are $U_{0,2} \oplus U_{1,1}$ and $U_{2,2} \oplus U_{0,1}$; together these facts imply Theorem~\ref{forty}. \section{Counting Ingleton Matroids} The proof of the following theorem uses techniques from~\cite[Proposition~2.1]{CooperMubayi2014}. \begin{theorem} There exists $n_0$ such that for all $n \ge n_0$ and all $0 < r < n$, the number of rank-$r$ Ingleton matroids with ground set $[n]$ is at least $2^{0.486 \frac{\log (r(n-r))}{r(n-r)} \binom{n}{r}}$. \end{theorem} \begin{proof} We may assume that $2 \le r \le \frac{n}{2}$, since otherwise the theorem is trivial or follows by duality. Write $N = \binom{n}{r}$ and $d = r(n-r)$ for the number of vertices and the valency of the Johnson graph $J(n,r)$. For $x \in \mathbb R$, let $f(x) = 1 - \tfrac{1}{2}x - \tfrac{1}{64}x^4$. We show that if $c > 0$ is a real number and $\gamma < cf(c)$, then there are at least $ 2^{\gamma\frac{\log d}{d}N}$ Ingleton sparse paving matroids of rank $r$ on $[n]$, provided $n$ is sufficiently large; the result as stated follows with $c = 0.95$ and $\gamma = 0.486$. Given $c$ and $\gamma$, let $\alpha$ be such that $\gamma/c < \alpha < f(c)$ and let $\epsilon = f(c) - \alpha$, so $1-f(c) + \epsilon = 1-\alpha$. 
Set $k = \left\lfloor c \frac{N}{d}\right\rfloor$, so $(1-o(1))\tfrac{c}{d} \le \tfrac{k}{N} \le \tfrac{c}{d}$. Pick a $k$-set $H$ of vertices in $J(n,r)$ uniformly at random from among all $k$-subsets of vertices and write $E(H)$ for the set of unordered pairs of vertices in $H$ joined by an edge in $J(n,r)$, and $e_H$ for $|E(H)|$. \begin{claim}\label{eestimates} $\mathbf{E}(e_H) \le \frac{ck}{2}$ and $\mathbf{Var}(e_H) = o(k^2)$. \end{claim} \begin{proof}[Proof of claim:] We have \[ \mathbf{E}(e_H) = \frac{1}{2} d N \frac{\binom{N-2}{k-2}}{\binom{N}{k}} \le \frac{dk^2}{2N} \le \frac{ck}{2}. \] Let $\Theta$ denote the set of ordered pairs $(e,f)$ of edges of $J(n,r)$. Write $\Theta_j$, $j\in\{0,1,2\}$, for the set of pairs in $\Theta$ that span $4-j$ vertices. Note that $|\Theta| = \frac{1}{4}d^2 N^2$, while $|\Theta_1| = Nd(d-1) \le Nd^2$ and $|\Theta_2| = \frac{1}{2}dN$. Now, using the fact that $\binom{N-\ell}{k-\ell}/\binom{N}{k} = (1+o(1))(k/N)^{\ell}$ for each constant $\ell$, we have \begin{align*} \mathbf{Var}(e_H) &= \sum_{\substack{(e,f) \in \Theta}}\big[\mathbf{Pr}(e,f \in E(H)) -\mathbf{Pr}(e \in E(H))\mathbf{Pr} (f \in E(H))\big]\\ &= |\Theta_0| \left(\frac{\binom{N-4}{k-4}}{\binom{N}{k}}-\frac{\binom{N-2}{k-2}^2}{\binom{N}{k}^2}\right) + |\Theta_1| \left(\frac{\binom{N-3}{k-3}}{\binom{N}{k}}-\frac{\binom{N-2}{k-2}^2}{\binom{N}{k}^2}\right) \\ &\qquad\qquad + |\Theta_2| \left(\frac{\binom{N-2}{k-2}}{\binom{N}{k}}-\frac{\binom{N-2}{k-2}^2}{\binom{N}{k}^2}\right) \\ &\le \frac{1}{4}N^2d^2 o\left(\frac{k^4}{N^4}\right) + Nd^2\left(\frac{k}{N}\right)^3 + \frac{1}{2} d N \left(\frac{k}{N}\right)^2 \\ &= o(d^{-2}N^2) + O(d^{-1}N) + O(d^{-1} N) = o(k^2), \end{align*} since $k = (1+o(1))cN/d$. \end{proof} Let $\Omega$ be the set of all pairs $(\{P_1,P_2,P_3,P_4\},K)$ where $P_1,P_2,P_3,P_4,K$ are pairwise disjoint subsets of $[n]$ with $|P_i|= 2$ and $|K| = r-4$ (note that the collection of $P_i$ is unordered). 
Now \begin{align*}|\Omega| &= \frac{1}{4!}\binom{8}{2,2,2,2}\binom{n}{8} \binom{n-8}{r-4}\\ &= \frac{8!}{2^4 \cdot 4!}\binom{n}{r}\frac{r!(n-r)!}{8!(r-4)!(n-r-4)!}\\ &\le \frac{d^4}{2^7 \cdot 3}N. \end{align*} For each $\omega \in \Omega$, let $U(\omega) = \{K \cup P_i \cup P_j\colon \{i,j\} \in \binom{[4]}{2}\}$, so $|U(\omega)| = 6$. For each $H$ and each $i \in \{0,\dotsc,6\}$, let $b_{i,H}$ denote the number of $\omega$ in $\Omega$ for which $|H \cap U(\omega)| = i$. \begin{claim}\label{bestimates} $\mathbf{E}(b_{5,H}) \le \frac{c^4k}{64}$ and $\mathbf{E}(b_{6,H}) = o(k)$ while $\mathbf{Var}(b_{5,H}) = o(k^2)$. \end{claim} \begin{proof}[Proof of claim:] The claim is trivial when $r < 4$ since $\Omega$ is empty, so suppose that $r \ge 4$. Given $\omega \in \Omega$, the probability that $|H \cap U(\omega)| = i$ is $\binom{6}{i}\binom{N-6}{k-i}/\binom{N}{k} \le \binom{6}{i}\left(k/N\right)^i \le \binom{6}{i}c^id^{-i}$, so \[\mathbf{E}(b_{i,H}) \le |\Omega|\binom{6}{i}c^id^{-i} \le \binom{6}{i}\frac{c^{i}d^{4-i}}{2^7 \cdot 3}N \le \binom{6}{i}\frac{c^{i-1}d^{5-i}k}{2^7 \cdot 3},\] giving $\mathbf{E}(b_{5,H}) \le \frac{c^4k}{64}$ and $\mathbf{E}(b_{6,H}) = o(k)$ as required. Let $\Pi = \Omega^2$, so $|\Pi| = |\Omega|^2 \le d^8 N^2$. Let $\Pi_0 := \{(\omega, \omega) : \omega \in \Omega\} \subseteq \Pi$, let $\Pi_2$ be the set of all $(\omega_1, \omega_2) \in \Pi$ for which $U(\omega_1) \cap U(\omega_2) = \emptyset$, and let $\Pi_1 = \Pi \setminus (\Pi_0 \cup \Pi_2)$. Since $U(\omega)$ contains $6$ vertices of $J(n,r)$ for each $\omega \in \Omega$, symmetry and a counting argument gives that for each vertex $v$ of $J(n,r)$, we have \[|\{\omega \in \Omega\colon v \in U(\omega)\}| = \frac{6|\Omega|}{N} = O(d^4).\] It follows that $|\Pi_1| = O(d^8 N)$. Call an $\omega \in \Omega$ \emph{bad} for $H$ if $|H \cap U(\omega)| = 5$. Recall that the probability that a given $\omega$ is bad is $6\binom{N-6}{k-5}/\binom{N}{k} = (6+o(1))k^5/N^5$. 
Note that $\omega$ is determined uniquely by any four sets in $U(\omega)$; it follows that if $(\omega_1,\omega_2) \in \Pi_1$ then $U(\omega_1)$ and $U(\omega_2)$ have at most three sets in common, so if both $\omega_1$ and $\omega_2$ are bad, then $H$ contains at least seven of the sets in $U(\omega_1)$ and $U(\omega_2)$. Since $|U(\omega_1) \cup U(\omega_2)| \le 10$, a pair $(\omega_1,\omega_2) \in \Pi_1$ is thus bad with probability at most $\binom{10}{7}\binom{N-7}{k-7}/\binom{N}{k} \le \binom{10}{7}(k/N)^7 = O(d^{-7})$. If $(\omega_1,\omega_2) \in \Pi_0$ then $\omega_1$ and $\omega_2$ are both bad with probability $(6+o(1))k^5/N^5 = O(d^{-5})$. If $(\omega_1,\omega_2) \in \Pi_2$ then both are bad with probability $36\binom{N-12}{k-10}/\binom{N}{k} = (36+o(1))k^{10}/N^{10}$. Therefore \begin{align*} \mathbf{Var}(b_{5,H}) &= \sum_{(\omega_1,\omega_2) \in \Pi}\big[\mathbf{Pr}(\omega_1,\omega_2 \text{ bad})-\mathbf{Pr}(\omega_1 \text{ bad})\mathbf{Pr}(\omega_2 \text{ bad})\big]\\ &\le |\Pi_2|\left(\frac{(36+o(1))k^{10}}{N^{10}} - \left(\frac{(6 + o(1))k^5}{N^5}\right)^2\right) + |\Pi_0|O(d^{-5}) \\ &\qquad\qquad + |\Pi_1| O(d^{-7}) \\ &\le d^8N^2 o(k^{10}/N^{10}) + O(d^{-1}N) + O(d N) \\ &= o(d^{-2}N^2) + O(dN). \end{align*} Now $d^{-2}N^2 = (1+o(1))k^2$, and, using $4 \le r \le \tfrac{n}{2}$, we have $dN = r(n-r)\binom{n}{r} = o\left(\frac{1}{r^2(n-r)^2}\binom{n}{r}^2\right) = o(k^2)$. It follows that $\mathbf{Var}(b_{5,H}) = o(k^2)$ as required. \end{proof} By the two claims, the random variables $e_H$ and $b_{5,H}$ have means at most $\frac{ck}{2}$ and $\frac{c^4k}{64}$ respectively, and both have standard deviations in $o(k)$; it follows by Chebyshev's inequality that $\mathbf{Pr}(e_H > (\frac{c}{2} + \frac{\epsilon}{3})k) = o(1)$ and $\mathbf{Pr}(b_{5,H} > (\frac{c^4}{64} + \frac{\epsilon}{3})k) = o(1)$. Since $\mathbf{E}(b_{6,H}) \in o(k)$, Markov's inequality gives $\mathbf{Pr}(b_{6,H} > \frac{\epsilon}{6}k) = o(1)$. 
Therefore, with probability $1-o(1)$, we have \[e_H + b_{5,H} + 2b_{6,H} \le (\tfrac{c}{2} + \tfrac{c^4}{64} + \epsilon)k = (1-f(c) + \epsilon)k = (1-\alpha) k.\] Call a set $W \subseteq \binom{[n]}{r}$ \emph{good} if $e_W = b_{5,W} = b_{6,W} = 0$. Each set $H \subseteq \binom{[n]}{r}$ of size $k$ contains a good set $W$ of size $|H|- e_H - b_{5,H} - 2b_{6,H}$. With probability $1-o(1)$ we have $e_H + b_{5,H} + 2b_{6,H} \le (1-\alpha)k$ and so $|W| \ge k - (1-\alpha)k = \alpha k$; thus there are at least $(1-o(1))\binom{N}{k}$ different choices of $H$ that contain a good set $W$ of size at least $\alpha k$. On the other hand, each good set $W$ of size at least $\alpha k$ is contained in at most $\binom{N}{(1-\alpha)k}$ different $H$; therefore the number of good sets is at least $\nu = (1-o(1))\tbinom{N}{k}/\tbinom{N}{(1-\alpha)k}$. We have \begin{align*} \log \nu &= \log \tbinom{N}{k} - \log \tbinom{N}{(1-\alpha)k} - o(1)\\ &\ge k \log(N/k) - (1-\alpha)k \log\left(\frac{eN}{(1-\alpha)k}\right) - o(1)\\ &=(1-o(1))\alpha k \log(N/k)\\ &=(1-o(1))\alpha c \frac{\log d}{d}N. \end{align*} But for large $n$ we have $(1-o(1))\alpha c > \gamma$, so there are at least $2^{\gamma \frac{\log d}{d}N}$ good sets. By Corollary~\ref{symmdiff}, each such set is the collection of circuit-hyperplanes of an Ingleton sparse paving matroid of rank $r$ on ground set $[n]$; the theorem follows. \end{proof} We have attempted to optimize the constant $0.486...$ in the exponent as much as possible within the constraints of our techniques; the proof can be simplified to use first rather than second moments, at the expense of a lowered constant of around $0.4$. One case where the constant can certainly be improved is where the rank $r$ (or, dually, the corank $n-r$) is constant, in which case the estimate on $|\Omega|$ can be improved by an asymptotically significant factor of $(1-\tfrac{1}{r})(1 - \tfrac{2}{r})(1 - \tfrac{3}{r})$. 
Carrying through this better estimate has the effect of slightly increasing the constant in the exponent further towards $0.5$, giving $0.5$ exactly when $r \le 3$, and roughly $0.498$ when $r = 4$. We complement the above enumeration result, which is based on the construction of a large family of sparse paving matroids, each of which contains roughly $\frac{1}{r(n-r)}\binom{n}{r}$ circuit-hyperplanes, by a construction that shows that sparse paving Ingleton matroids with many more circuit-hyperplanes exist. We sketch the construction, which is originally due to Graham and Sloane~\cite{GrahamSloane1980}. Suppose that $0 < r < n$. The function $c\colon V(J(n,r)) \to \mathbb Z_n$ given by $c(X) = \sum_{x \in X} x \mod n$ is a proper vertex colouring of $J(n,r)$: adjacent vertices are $r$-sets $X,Y$ with $X - Y = \{x\}$ and $Y - X = \{y\}$ for distinct $x,y \in [n]$, so $c(X) - c(Y) = x - y \not\equiv 0 \pmod n$. It follows that for each $\gamma \in \mathbb Z_n$, the set $c^{-1}(\gamma) := \{X : c(X) = \gamma\}$ is a stable set in $J(n,r)$ and hence $S(n,r,\gamma) := \left([n], \binom{[n]}{r}\setminus c^{-1}(\gamma)\right)$ is the sparse paving matroid whose set of circuit-hyperplanes is $c^{-1}(\gamma)$. \begin{lemma}\label{lemma:S-ingleton} $S(n,r,\gamma)$ is Ingleton. \end{lemma} \begin{proof} For the sake of contradiction, suppose that $A, B, C, D \subseteq [n]$ violate Ingleton's inequality and obtain $K, P_1, P_2, P_3, P_4$ as in Corollary~\ref{symmdiff}. Write $P_i = \{p_i, p'_i\}$. We may assume that $K \cup P_i \cup P_j$ is a circuit-hyperplane for all $\{i,j\} \in \binom{[4]}{2}\setminus\{\{3,4\}\}$, while $K \cup P_3 \cup P_4$ is a basis. Define $\gamma' = \gamma - \sum_{x \in K} x \mod n$. It follows that \[ p_1 + p'_1 + p_2 + p'_2 = p_1 + p'_1 + p_3 + p'_3 = p_2 + p'_2 + p_4 + p'_4 = \gamma'. \] The first two equalities give $p_2 + p'_2 = p_3 + p'_3$, so the third yields \[ p_3 + p'_3 + p_4 + p'_4 = \gamma', \] which implies that $c(K\cup P_3 \cup P_4) = \gamma$, contradicting that $K \cup P_3 \cup P_4$ is a basis of $S(n,r,\gamma)$. 
\end{proof} \begin{corollary} For all $0 < r < n$, there exists a sparse paving Ingleton matroid of rank $r$ on ground set $[n]$ that has at least $\frac{1}{n}\binom{n}{r}$ circuit-hyperplanes. \end{corollary} \begin{proof} As $\{c^{-1}(\gamma) : \gamma \in \mathbb Z_n\}$ partitions $V(J(n,r))$, there is $\gamma_0 \in \mathbb Z_n$ such that $|c^{-1}(\gamma_0)| \ge \frac{1}{n} \binom{n}{r}$. Consequently, the matroid $S(n,r,\gamma_0)$, which is Ingleton by Lemma~\ref{lemma:S-ingleton}, has at least $\frac{1}{n}\binom{n}{r}$ circuit-hyperplanes. \end{proof} \bibliographystyle{alpha}
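The construction above is easy to check computationally for small parameters. The following sketch (in Python; the ground set is relabelled as $\{0,\dots,n-1\}$ rather than $[n]$, which only permutes the colour classes) verifies that every class $c^{-1}(\gamma)$ is a stable set of $J(n,r)$ and that some class attains the pigeonhole bound of the corollary:

```python
# Verify, for small n and r, that c(X) = sum(X) mod n properly colours the
# Johnson graph J(n, r): two distinct r-sets with the same colour meet in
# at most r-2 elements, so each colour class c^{-1}(gamma) is stable.
# Also check the pigeonhole bound: some class has >= binom(n, r)/n members.
from itertools import combinations
from math import comb

def colour_classes(n, r):
    classes = {g: [] for g in range(n)}
    for X in combinations(range(n), r):
        classes[sum(X) % n].append(frozenset(X))
    return classes

def is_stable(cls, r):
    # adjacency in J(n, r) means |X intersect Y| = r - 1
    return all(len(X & Y) <= r - 2 for X, Y in combinations(cls, 2))

for n, r in [(7, 3), (8, 4), (9, 3)]:
    classes = colour_classes(n, r)
    assert all(is_stable(cls, r) for cls in classes.values())
    assert n * max(len(cls) for cls in classes.values()) >= comb(n, r)
```

The stability check mirrors the argument in the lemma: if two $r$-sets with equal colour differed in a single element, those two elements would be congruent modulo $n$ and hence equal.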
\section{Introduction}\label{s:intro} Fluid-structure interaction (FSI) problems are challenging for various reasons. They combine the computational challenges of (generally non-linear) fluid and structural mechanics, and they introduce new challenges, both physical and numerical, due to the coupling. If the structure is highly flexible, such as a thin membrane, large deformations can be expected. Those, in turn, have a large influence on the fluid flow. A comprehensive overview of FSI and its challenges is given by the monographs of \citet{ohayon04}, \citet{Bazilevs:2013vi} and \citet{Bazilevs:2016dg}. The classical focus in FSI problems is on solid structures. However, some structures are not solids but rather fluids or fluid-like objects. Examples are liquid menisci, soap films and lipid bilayers. Lipid bilayers surround biological cells. They are characterized by both solid-like (i.e.~elastic bending) and fluid-like behavior (i.e.~in-plane flow). Further, liquid (and solid) membranes can come into contact with surrounding objects. A classical example is a liquid droplet rolling on a substrate. The problem is characterized by fluid flow, surface tension and contact. \\ While various formulations that capture these individual aspects are available in the literature, there is no formulation that unifies them all into a single framework. This is the objective of the present work. In doing so, we build on our recent computational work on contact, membranes, shells and fluid dynamics. The presented formulation is based on finite elements (FE) using an interface tracking technique based on a sharp interface formulation. There is a large body of FE-based literature on membrane FSI, which is surveyed in the following. The computational approaches to interactions between fluids and membrane-like structures can be sorted into two groups.
The first group deals with solid structures like elastic membranes and flexible shells, while the second group is concerned with liquid membranes and menisci. The first group can be further sorted into approaches that use surface formulations (based on shell and membrane theories) and contributions that use bulk formulations. The second group can be further sorted into approaches that only account for the shape equation in order to characterize the liquid membrane (like the Young-Laplace equation), and approaches that also account for in-plane equations (such as the surface Navier-Stokes equations). The latter case is necessary for liquid membranes that are not surrounded by a fluid, and consequently the FSI problem is due to the interplay of membrane shape and surface flow. If a surrounding medium is considered, and no-slip conditions are applied on the membrane surface, the flow within the membrane is already captured by the bulk flow, and so no further equations are needed. The method presented here is based on a surface formulation that accounts for both shape and in-plane equations. The following references deal with solid membranes using surface formulations. In \citet{1997IJNMF..24.1091L} the authors employ a deformable spatial domain space-time FEM to study the interaction of an incompressible fluid with an elastic membrane. \citet{bletzinger06} compute the flow around a tent structure using a staggered coupling between a shell code and a CFD code. \citet{Tezduyar:2007eb} review their FSI formulation based on space-time FE and introduce advancements regarding accuracy, robustness and efficiency. Benchmark examples include the inflation of a balloon, the flow through a flexible diaphragm in a tube as well as a descending parachute. Parachutes are also analyzed in \citet{karagiozis11} and \citet{Takizawa:2012il} using thin-shell formulations. 
\citet{Le:2009eo} developed an implicit immersed boundary method for the incompressible Navier-Stokes equations to simulate membrane-fluid interactions. Their examples include an oscillating spherical ball immersed in a fluid and the stretching of a red blood cell in a pressure driven shear flow. \citet{vanOpstal:2015ip} present a hybrid isogeometric finite-element/boundary element method for fluid-structure interaction problems of inflatable structures such as airbags and balloons. Boundary elements are also used in a recent isogeometric FSI formulation for Stokes flow around thin shells \citep{heltai16}. \\ The following references deal with solid membranes using bulk formulations. \citet{kloeppel11} numerically investigate the flow inside red blood cells (RBC) by means of monolithically coupling an incompressible fluid to a lipid bilayer represented by incompressible solid shell elements. In \citet{2016CMAME.298..520F} the authors develop a monolithic strategy for the description of purely Lagrangian FSI problems. For the solid, the FEM is used, while the fluid is discretized using the so-called Particle FEM \citep{Idelsohn:2004py}. \citet{Yang:2016gp} introduce a finite-discrete element method for bulk solids and combine the developed numerical model with a finite element multiphase flow model. Only 2D examples are considered, such as a rigid structure floating on a liquid-gaseous interface. \\ Recent reviews on computational FSI methods for solids have been given by \citet{Dowell:2001vu}, \citet{vanLoon:2007kk} and \citet{Bazilevs:2013vi}. For an introduction to immersed-boundary methods as an alternative to conforming FE discretizations we refer to \citet{Peskin:2003go}. The following references deal with liquid membranes governed only by a shape equation. 
\citet{Walkley:2005ui} present an arbitrary Lagrangian-Eulerian (ALE) framework for the solution of free surface flow problems including a dynamic contact line model and show its capabilities for the case of a sliding droplet. \citet{saksono06b} propose a 2D finite element formulation for surface tension and apply it to oscillating droplets and stretched liquid bridges. \citet{Montefuscolo:2014hg} introduce high-order ALE FEM schemes for capillary flows. The schemes are demonstrated on oscillating and sliding droplets accounting for varying contact angles. \\ The following references deal with liquid membranes governed by shape and in-plane equations. \citet{Barett:2015wr} present a numerical study of the dynamics of lipid bilayer vesicles. A parametric finite element formulation is introduced to discretize the surface Navier-Stokes equations. \citet{rangarajan15} introduce a spline-based finite-element formulation to compute equilibrium configurations of liquid membranes. \citet{liquidshell} present a 3D isogeometric finite element formulation for liquid membranes that accounts for the in-plane viscosity and incompressibility of the liquid. \\ A general introduction to fluid membranes and vesicles and their configurations observed in nature is given by \citet{Seifert:1997cq}. For a review on the droplet dynamics within flows, see \citet{Cristini:2004ac}. There is also earlier work on combining contact and FSI. It can be grouped into two categories: Either contact is considered between solids submerged within the fluid (e.g.~see \citet{tezduyar06,mayer10}), or contact is considered at free liquid surfaces. For liquid surfaces the same classical contact algorithms as for solid surfaces can be used \citep{droplet}. An alternative treatment of free surface contact appears naturally in the Particle FEM \citep{Idelsohn06}. Additionally, the contact behavior between liquids and solids is also governed by a contact angle and its hysteresis during sliding contact. 
A general computational algorithm for contact angle hysteresis is given in \citet{dropslide}. Existing work is motivated by specific examples that focus on either solid or liquid membranes. The aim of this paper is therefore to provide a new unified FSI formulation that is suitable to describe solid membranes -- such as sheets, fabrics and tissues -- liquid membranes -- such as menisci and soap films -- and membranes with both solid- and liquid-like character, like lipid bilayers. The formulation is based on a new membrane model that has been recently proposed to unify solid and liquid membranes \citep{membrane}. The membrane model readily admits general constitutive laws \citep{shelltheo}, it extends to Kirchhoff-Love shells \citep{solidshell} and it is suitable to describe the coupling with other field equations \citep{sahu17}. Further, the explicit surface formulation of the membrane provides a natural framework for free-surface contact such that any existing contact algorithm can be used. The present work considers a monolithic coupling scheme between fluid and structure, and solves the resulting non-linear system of equations with the Newton-Raphson method. Finite elements and the generalized-$\alpha$ scheme are used for the spatial and temporal discretization. The formulation uses a conforming interface discretization and an ALE formulation for the mesh motion. Compared to partitioned solvers, monolithic solvers are more complicated to implement (as they require the full tangent matrix and thus need a single code environment). But in terms of robustness, monolithic solvers are superior since the coupling between fluid and structure is fully accounted for without further approximation (beyond the usual FE discretization error). Also in terms of computational efficiency, recent works have shown that pre-conditioned monolithic solvers are competitive with partitioned ones \citep{Heil08,Kuettler10,Ha17}.
For these reasons the present work uses a monolithic FSI solver. The following aspects are new in this work: \begin{itemize} \item A unified monolithic FSI formulation for liquid and solid membranes is presented. \item It includes contact on free liquid surfaces, and \item it easily extends to rotation-free shells with general constitutive behavior. \item Two simple analytical FSI examples are presented. \item The formulation is suitable for a wide range of applications, including free-surface flows, liquid menisci, flags and flexible wings. \item The examples include a flow and contact analysis of a rolling 3D droplet. \end{itemize} The remainder of this paper is structured as follows. Sec.~\ref{s:theo} presents the governing theory of incompressible fluid flow, nonlinear membranes and their coupling. The theory is used to solve two simple analytical FSI examples in Sec.~\ref{s:ana}. The computational treatment is then presented in Sec.~\ref{s:FE} using finite elements for the spatial discretization of fluid and membrane, and the generalized-$\alpha$ scheme for the temporal discretization of the coupled system. Sec.~\ref{s:ex} presents three numerical examples ranging from very low to quite large Reynolds numbers. The paper concludes with Sec.~\ref{s:concl}. \section{Governing equations}\label{s:theo} This section summarizes the governing equations for fluid flow, membrane deformation, membrane contact and their coupling. The symbols $\sF$ and $\sS$ are used to denote the fluid domain and the membrane surface, cf.~Fig.~\ref{f:infl_cyl} in Sec.~\ref{s:ana_infl} and Fig.~\ref{f:flag_ex} in Sec.~\ref{s:flag}. \subsection{Fluid flow}\label{e:theo_f} The fluid motion is described by an arbitrary Lagrangian-Eulerian (ALE) formulation. It is therefore necessary to distinguish between the material motion and the mesh motion. 
An ALE formulation contains the special cases of a purely Lagrangian description, for which the material and mesh motion coincide, and a purely Eulerian description, for which the mesh motion is zero. \subsubsection{Fluid kinematics}\label{s:Fkin} The material motion of a fluid particle $\bX$ within domain $\sF$ is characterized by the deformation mapping \eqb{l} \bx = \bvphi(\bX,t) \eqe and the corresponding deformation gradient (or Jacobian) \eqb{l} \bF := \ds\pa{\bvphi}{\bX}\,. \label{e:bF}\eqe The volume change during deformation is captured by the Jacobian determinant $J:=\det\bF$. The velocity of the material is given by the time derivative of $\bx$ for fixed $\bX$, written as \eqb{l} \bv := \ds\pa{\bx}{t}\Big|_{\bX} \label{e:bv}\eqe and commonly referred to as the \textit{material time derivative}. It is also often denoted by the dot notation $\bv=\dot\bx$. An important object characterizing the fluid flow is the velocity gradient \eqb{l} \bL := \nabla\bv = \ds\pa{\bv}{\bx} \label{e:bL}\eqe that can also be written as $\bL = \dot\bF\bF^{-1}$, where $\dot\bF$ is the material time derivative of the deformation gradient. The symmetric part of the velocity gradient is denoted by $\bD:=\big(\bL+\bL^\mathrm{T}\big)/2$. \\ In analogy to Eq.~\eqref{e:bv}, the material acceleration is given by \eqb{l} \ba:=\dot\bv = \ds\pa{\bv}{t}\Big|_{\bX}\,. \label{e:ba}\eqe It is related to the acceleration for fixed $\bx$, \eqb{l} \bv' := \ds\pa{\bv}{t}\Big|_{\bx}\,, \label{e:bvprime}\eqe according to \eqb{l} \dot\bv = \bv' + \bL\,(\bv-\bv_\mrm)\,, \label{e:bvdot}\eqe where $\bv_\mrm$ is the mesh velocity \citep{donea}. For a purely Lagrangian description $\bv_\mrm = \bv$, while for a purely Eulerian description $\bv_\mrm = \mathbf{0}$. \textbf{Remark 2.1}: The gradient operator appearing in Eq.~\eqref{e:bL} (and likewise in Eq.~\eqref{e:bF}), is defined here as $\nabla\bv:=v_{i,j}\,\be_i\otimes\be_j$.\footnote{Following index notation, summation is implied on repeated indices.
Latin indices range from 1 to 3 and refer to Cartesian coordinates. Greek indices range from 1 to 2 and refer to curvilinear surface coordinates.} In matrix notation this corresponds to the square $3\times3$ matrix $[v_{i,j}]$. \subsubsection{Fluid equilibrium} From the balance of linear momentum within $\sF$ follows the equilibrium equation \eqb{l} \divz\bsig + \bar\bff = \rho\,\dot\bv \quad $in $\sF\,, \label{e:sf_f}\eqe which governs the fluid flow together with the boundary conditions \eqb{rlll} \bv \is \bar\bv ~& $on $\partial_x\sF\,, \\[2mm] \bsig\bn = \bt \is \bar\bt ~& $on $\partial_t\sF\,. \label{e:bc_f}\eqe Here, $\bsig$ denotes the stress tensor within $\sF$, $\bt$ denotes the traction vector on the surface characterized by normal vector $\bn$, and $\rho$ denotes the fluid density, while $\bar\bff$, $\bar\bv$ and $\bar\bt$ are prescribed body forces, surface velocities and surface tractions. $\partial_x\sF$ and $\partial_t\sF$ denote the corresponding Dirichlet and Neumann boundary regions of the fluid domain $\sF$. Boundary $\partial_x\sF$ can be split into the two parts \eqb{l} \partial_x\sF = \sS \cup \partial_{\hat x}\sF\,, \eqe where $\sS$ is the surface of the membrane, which is considered to impose its velocity onto the fluid, and $\partial_{\hat x}\sF$ denotes the remaining Dirichlet boundary of the fluid domain. In order to solve PDE \eqref{e:sf_f} for $\bv(\bx,t)$, the initial condition \eqb{l} \bv(\bx,0) = \bv_0(\bx) \eqe is needed. \subsubsection{Fluid constitution} We consider an incompressible Newtonian fluid with kinematic viscosity $\nu$ and dynamic viscosity $\eta=\nu\rho$. In that case the stress tensor is given by \eqb{l} \bsig = -p\,\bone + 2\eta\,\bD\,, \eqe where $p$ is the Lagrange multiplier to the incompressibility constraint \eqb{l} g := J - 1 = 0\,, \label{e:g1}\eqe which is equivalent to the condition \eqb{l} \divz\bv = 0\,. 
\label{e:g2}\eqe A consequence of this condition is that the fluid pressure, defined as $-\tr\bsig/3$, is equal to the Lagrange multiplier $p$. It is an additional unknown that needs to be solved for together with $\bv$. In case of pure Dirichlet boundary conditions ($\partial_t\sF=\emptyset$), the value of $p$ needs to be specified at one point in the fluid domain in order for the pressure field to be uniquely determinable. \subsubsection{Fluid weak form}\label{s:wfF} In order to solve the problem with finite elements the strong form equations \eqref{e:sf_f}, (\ref{e:bc_f}.2) and \eqref{e:g2} are reformulated in weak form. They are therefore multiplied by the test functions $\bw$ and $q$, and integrated over the domain $\sF$. Function $\bw$ is assumed to be zero on the Dirichlet boundary $\partial_{\hat x}\sF$, but non-zero on the surface $\sS$. Functions $\bw$ and $q$ are further assumed to possess sufficient regularity for the following integrals to be well defined. In the framework of SUPG\footnote{Streamline upwind/Petrov-Galerkin \citep{Brooks82}} and PSPG\footnote{Pressure stabilizing/Petrov-Galerkin \citep{Hughes86}} stabilization, the weak form takes the form \eqb{rll} G_\sF := G_{\sF\mathrm{in}} + G_{\sF\mathrm{int}} + G_\mathrm{supg} - G_{\sF\mrs} - G_{\sF\mathrm{ext}} \is 0 \quad \forall~\bw\in\sW\,, \\[1mm] G_\sG := G_\mrg + G_\mathrm{pspg} \is 0 \quad \forall~q\in\sQ\,, \label{e:wfF}\eqe where \eqb{l} G_{\sF\mathrm{in}} := \ds\int_\sF \bw \cdot \rho\,\dot\bv\,\dif v \eqe is the virtual work associated with inertia, \eqb{l} G_{\sF\mathrm{int}} := \ds\int_\sF \nabla\bw : \bsig\,\dif v \eqe is internal virtual work, \eqb{l} G_{\sF\mrs} := \ds\int_{\sS} \bw \cdot \bt\,\dif a \eqe is the virtual work of the fluid traction $\bt=\bsig\bn$ on boundary $\sS$, \eqb{l} G_{\sF\mathrm{ext}} := \ds\int_\sF \bw \cdot \bar\bff\,\dif v + \int_{\partial_t\sF} \bw \cdot \bar\bt\,\dif a \eqe is the external virtual work\footnote{In the following examples we consider zero 
Neumann BC ($\bar\bt=\mathbf{0}$) and constant gravity loading with $\bar\bff=\rho\,\bg$.}, \eqb{l} G_\mrg := \ds\int_\sF q\,\divz\bv\,\dif v \eqe is the virtual work associated with incompressibility constraint \eqref{e:g2}, \eqb{l} G_\mathrm{supg} := \ds\int_\sF \tau_\mrv\,\bff_{\!\mathrm{res}} \cdot \nabla \bw\,(\bv-\bv_\mrm)\,\dif v \eqe is the SUPG term, \eqb{l} G_\mathrm{pspg} := \ds\int_\sF \tau_\mrp \nabla q \cdot \bff_{\!\mathrm{res}}\,\dif v \eqe is the PSPG term, and \eqb{l} \bff_{\!\mathrm{res}} := \rho\,\dot\bv - \divz\bsig - \bar\bff \eqe is the residual of Eq.~\eqref{e:sf_f}. Dimensionally, the residual is a force per volume. Since in theory $\bff_{\!\mathrm{res}} = \mathbf{0}$, stabilization terms $G_\mathrm{supg}$ and $G_\mathrm{pspg}$ do not affect the physical behavior of the system. In Cartesian coordinates $\bff_{\!\mathrm{res}} \cdot \nabla \bw\,(\bv-\bv_\mrm) = f_i^\mathrm{res}\,w_{i,j}\,(v_j-v_{\mrm j})$. The scalars $\tau_\mrv$ and $\tau_\mrp$ are stabilization parameters that are discussed in Sec.~\ref{s:FE}. \subsection{Deforming membranes}\label{s:theo_s} This work focuses on pure membranes that do not resist bending and out-of-plane shear. The description of those membranes is based on the formulation of \citet{membrane}, which admits both solid and liquid membranes. What follows is a brief summary. \subsubsection{Membrane kinematics} The motion of a membrane surface $\sS$ is fully described by the mapping \eqb{l} \bx = \bx(\xi^\alpha,t)\,, \label{e:bx}\eqe where $\xi^\alpha$, for $\alpha=1,2$, are curvilinear coordinates that can be associated with material points on the surface. They can be conveniently taken from the parameterization of the finite element shape functions. 
Based on mapping \eqref{e:bx}, the tangent vectors $\ba_\alpha:=\partial\bx/\partial\xi^\alpha$ to surface $\sS$, the metric tensor components $a_{\alpha\beta}:=\ba_\alpha\cdot\ba_\beta$,\footnote{following the notation where $g_{ij}$ is the metric in the bulk, and $a_{\alpha\beta}$ is the metric on the surface} and the surface normal $\bn = \ba_1\times\ba_2/\sqrt{\det[a_{\alpha\beta}]}$ can be determined. From the matrix inverse $[a^{\alpha\beta}]=[a_{\alpha\beta}]^{-1}$, the dual tangent vectors $\ba^\alpha:=a^{\alpha\beta}\ba_\beta$ can be defined such that $\ba^\alpha\cdot\ba_\beta$ is equal to the Kronecker delta~$\delta^\alpha_\beta$.\\ In order to characterize deformation, a stress-free reference configuration $\sS_0$ is introduced. It will be considered here as the initial membrane surface, i.e.~$\sS_0:=\sS|_{t=0}$. In the reference configuration the tangent vectors, metric tensor components, inverse components and normal vector are denoted by capital letters, i.e.~$\bA_\alpha$, $A_{\alpha\beta}$, $A^{\alpha\beta}$ and $\bN$. The in-plane deformation of surface $\sS$ is fully characterized by the relation between $A^{\alpha\beta}$ and $a^{\alpha\beta}$. The surface stretch for instance is given by $J_\mrs := \sqrt{\det[a_{\alpha\beta}]/\det[A_{\alpha\beta}]}$. \\ Following definitions~\eqref{e:bv} and \eqref{e:ba}, the membrane velocity $\bv$ and acceleration $\ba$ are obtained from Eq.~\eqref{e:bx}. \subsubsection{Membrane equilibrium} From the balance of linear momentum within $\sS$ follows the equilibrium equation \eqb{l} (\bsig_{\!\mrs}\,\ba^\alpha)_{;\alpha} + \bff_{\!\mrs} = \rho_\mrs\,\dot\bv \quad $in $\sS\,, \label{e:sf_s}\eqe which governs the membrane deformation together with the boundary conditions \eqb{rlll} \bx \is \bar\bx & $for $\bx\in\partial_x\sS\,, \\[1mm] \bsig_{\!\mrs}\,\bnu = \bt_\mrs \is \bar\bt_\mrs & $for $\bx\in\partial_t\sS\,, \label{e:bc_s}\eqe e.g.~see \citet{shelltheo}.
Here, $\bsig_{\!\mrs}$ denotes the stress tensor within $\sS$, $(...)_{;\alpha}$ denotes the covariant derivative w.r.t.~$\xi^\alpha$, $\bt_\mrs$ denotes the traction vector on the membrane boundary characterized by normal vector $\bnu$, and $\rho_\mrs$ denotes the membrane density, while $\bar\bx$ and $\bar\bt_\mrs$ are prescribed boundary positions and boundary tractions. The body force $\bff_{\!\mrs}$ is considered here to have contributions coming from the flow field, contact and external sources, i.e. \eqb{l} \bff_{\!\mrs} = \bff_{\!\mrf} + \bff_{\!\mrc} + \bar\bff_{\!\mrs}\,. \eqe In order to solve PDE \eqref{e:sf_s} for $\bx(\xi^\alpha,t)$, the initial conditions \eqb{lll} \bx(\xi^\alpha,0) \is \bX(\xi^\alpha)\,,\\[1mm] \bv(\xi^\alpha,0) \is \bv_0(\xi^\alpha)\,, \eqe are needed. \subsubsection{Membrane constitution} For pure membranes, the stress tensor only has in-plane components, i.e.~it has the format $\bsig_{\!\mrs} = \sig^{\alpha\beta}\,\ba_\alpha\otimes\ba_\beta$. Two material models are considered in this work. The first, \eqb{l} \sig^{\alpha\beta} = \ds\frac{\mu}{J_\mrs}\bigg(A^{\alpha\beta} - \frac{1}{J^2_\mrs}\,a^{\alpha\beta}\bigg)\,, \label{e:sig_sol}\eqe is suitable for solid membranes. It can be derived from the 3D incompressible Neo-Hookean material model \citep{membrane}. The second, \eqb{l} \sig^{\alpha\beta} = \gamma\,a^{\alpha\beta}\,, \label{e:sig_liq}\eqe models isotropic surface tension, and is suitable to describe liquid membranes, e.g.~see \citet{droplet}. The parameters $\mu$ and $\gamma$ denote the shear stiffness and the surface tension, respectively. Both are considered constant here. \subsubsection{Membrane contact} This work also considers that sticking contact can occur on the membrane surface $\sS_\mrc\subset\sS$. During sticking contact no relative motion occurs between the membrane and a neighboring substrate surface $\sS_\mathrm{sub}$.
Mathematically this corresponds to the constraint \eqb{l} \bg = \mathbf{0}\quad\forall\,\bx\in\sS_\mrc\,, \label{e:bgc}\eqe where \eqb{l} \bg(\bx) = \bx-\bx^0_\mrp \eqe denotes the contact gap between the membrane point $\bx\in\sS_\mrc$ and its initial projection point on the substrate surface, $\bx_\mrp^0\in\sS_\mathrm{sub}$, i.e.~$\bx^0_\mrp$ is the location where $\bx$ initially touched $\sS_\mathrm{sub}$. Here, constraint \eqref{e:bgc} will be enforced by a penalty regularization. For this, the contact traction at $\bx\in\sS$ is given by \eqb{l} \bff_{\!\mrc} = \left\{\begin{array}{ll} -\epsilon\,\bg & $if $\bg\cdot\bn_\mrc < 0\,, \\[1mm] \mathbf{0} & $else$\,, \end{array}\right. \label{e:fc}\eqe where $\bn_\mrc$ is the surface normal of $\sS_\mathrm{sub}$. Instead of the penalty formulation, any other contact formulation can also be used to enforce \eqref{e:bgc}. Further details on large deformation contact theory can be found in the textbooks of \citet{laursen} and \citet{wriggers-contact}. \subsubsection{Membrane weak form} In order to employ finite elements, the strong form equations \eqref{e:sf_s} and (\ref{e:bc_s}.2) are reformulated in weak form.
As shown in \citet{shelltheo}, the weak form for the membrane can be written as \eqb{l} G_\sS := G_{\sS\mathrm{in}} + G_{\sS\mathrm{int}} + G_\mrc - G_{\sS\mrf} - G_{\sS\mathrm{ext}} = 0\quad\forall~\bw\in\sW\,, \label{e:wfS}\eqe with the virtual work contributions \eqb{rll} G_{\sS\mathrm{in}} \dis \ds\int_\sS\bw\cdot\rho_\mrs\,\dot\bv\,\dif a\,, \\[4mm] G_{\sS\mathrm{int}} \dis \ds\int_{\sS}\sig^{\alpha\beta}\,\bw_{;\alpha}\cdot\ba_\beta\,\dif a\,, \\[4mm] G_\mrc \dis-\ds\int_\sS\bw\cdot\bff_{\!\mrc}\,\dif a \,, \\[4mm] G_{\sS\mrf} \dis \ds\int_\sS\bw\cdot\bff_{\!\mrf}\,\dif a \,, \\[4mm] G_{\sS\mathrm{ext}} \dis \ds\int_\sS\bw\cdot\bar\bff_{\!\mrs}\,\dif a + \int_{\partial_t\sS} \bw\cdot\bar\bt_\mrs\,\dif s\,, \label{e:Gicfe}\eqe due to inertia, internal forces, contact forces, fluid forces and external forces acting on $\sS$ and $\partial_t\sS$. Test function $\bw$ is the same as in \eqref{e:wfF}. Therefore, space $\sW$ needs to additionally satisfy the requirement that all integrals appearing above are well defined. Further $\bw$ is assumed to be zero on $\partial_x\sS$. \\ Pure membranes are inherently unstable in the quasi-static case ($\bv=\dot\bv=\mathbf{0}$) and therefore need to be stabilized \citep{membrane,droplet}. Here, no stabilization is required as the fluid forces $\bff_\mrf$ stabilize the membrane, even when $\rho_\mrs=0$ (as is considered in some of the following examples). In the numerical examples following later, $\bar\bff_{\!\mrs}$ and $\bar\bt_\mrs$, and consequently $G_{\sS\mathrm{ext}}$, are considered zero. \textbf{Remark 2.2}: It is straightforward to extend weak form~\eqref{e:wfS} to Kirchhoff-Love shells: $G_{\sS\mathrm{int}}$ and $G_{\sS\mathrm{ext}}$ simply need to be extended by the bending moments acting within $\sS$ and on $\partial\sS$, e.g.~see \citet{solidshell}. Kirchhoff-Love shells are suitable for thin membrane-like surface structures. Such a structure is considered in Sec.~\ref{s:flag} using isogeometric finite elements.
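The penalty regularization \eqref{e:fc} amounts to a simple pointwise rule that can be sketched in a few lines (Python/NumPy; the substrate normal, projection point and penalty parameter $\epsilon$ are illustrative values, not taken from the later examples):

```python
# Penalty contact traction of Eq. (e:fc): f_c = -eps * g if the gap vector
# points into the substrate (g . n_c < 0), and zero otherwise.
import numpy as np

def contact_traction(x, xp0, n_c, eps):
    g = x - xp0                      # contact gap vector g = x - x_p^0
    if g @ n_c < 0.0:                # penetration: apply penalty force
        return -eps * g
    return np.zeros(3)               # separated: no traction

n_c = np.array([0.0, 0.0, 1.0])      # substrate normal (illustrative)
xp0 = np.zeros(3)                    # initial projection point (illustrative)
f = contact_traction(np.array([0.0, 0.0, -0.01]), xp0, n_c, eps=1e4)
assert np.allclose(f, [0.0, 0.0, 100.0])   # repulsive traction, eps * |g|
f = contact_traction(np.array([0.0, 0.0, 0.05]), xp0, n_c, eps=1e4)
assert np.allclose(f, 0.0)                 # no traction when separated
```

The traction is proportional to the penetration, so the constraint is only satisfied approximately; increasing $\epsilon$ reduces the residual penetration at the cost of conditioning.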
\subsection{Coupling conditions} The membrane deformation $\bx$ moves the fluid such that \eqb{l} \bv = \dot\bx~~$on $\sS \label{e:coupx}\eqe is a Dirichlet BC for the fluid. This choice assumes no tangential slip between membrane and fluid. In response, the flow exerts a traction on the membrane such that \eqb{l} \bff_{\!\mrf} = -\bt~~$on $\sS \label{e:coupt}\eqe is a `body force' of the membrane. Eq.~\eqref{e:coupx} is the kinematic coupling condition between the two domains, while Eq.~\eqref{e:coupt} is the kinetic coupling condition. If the membrane is surrounded by fluid on both sides, $\bt$ in \eqref{e:coupt} is replaced by the traction jump $[\![\bt]\!]:=\bt^+-\bt^-$, where $\bt^+$ is the traction on the front side (with outward normal $\bn$) and $\bt^-$ is the traction on the back side (with outward normal $-\bn$) of the membrane. The combined FSI problem is then characterized by the two governing equations \eqb{lll} G_\sF + G_\sS \is 0\quad\forall~\bw\in\sW\,, \\[1mm] G_\sG \is 0\quad\forall~q\in\sQ\,, \label{e:wf}\eqe which can be solved for the unknown velocity $\bv$ and pressure $p$ in $\sF$. The membrane deformation can then be obtained from integrating $\bv$. Coupling condition \eqref{e:coupt} simply leads to the cancelation of terms $G_{\sF\mrs}$ and $G_{\sS\mrf}$ in the combined weak form \eqref{e:wf}. This cancelation will carry over to the discretized weak form, as long as surface $\sS$ is discretized conformingly on the fluid and membrane side. \section{Analytical examples}\label{s:ana} This section presents the analytical solution of two simple examples. They serve as verification examples for the computational implementation discussed later. \subsection{Solid membrane example: Fluid-inflated cylinder}\label{s:ana_infl} As a first example we consider the radial inflation of a membrane cylinder due to a constant radial inflow as is illustrated in Fig.~\ref{f:infl_cyl}. 
\begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,5) \put(-4.8,-.2){\includegraphics[height=50mm]{cfigs/infl_cyl.jpg}} \end{picture} \caption{Fluid-inflated cylinder: Membrane deformation $\sS_0\rightarrow\sS$ and fluid velocity $v(r)$ due to a radial inflow at $R_\mathrm{in}$.} \label{f:infl_cyl} \end{center} \end{figure} The example is chosen since it can be fully solved analytically and thus used for verification of the computational formulation, which is then considered in Sec.~\ref{s:ex1}. Given the inflow velocity $v_\mathrm{in}$ at the inner boundary $R_\mathrm{in}$, the radial fluid velocity at location $r$ is given by \eqb{l} v(r) = \ds\frac{v_\mathrm{in}\, R_\mathrm{in}}{r} \label{e:ana1_v} \eqe due to continuity. Since $v=\dot r$, we obtain \eqb{l} r(R,t) = \sqrt{R^2+2v_\mathrm{in}\,R_\mathrm{in}\,t}\,, \label{e:ana1_r} \eqe as the current position of the fluid particle initially at $R$. The current membrane position is thus given by $r_\mrs=r(R_\mrs,t)$, where $R_\mrs$ is the initial position of the membrane. In vectorial notation, the flow field can thus be characterized by the position, velocity and acceleration \eqb{lll} \bx(R,t) \is r\,\be_r\,, \\[1mm] \bv(R,t) \is v\,\be_r\,, \\[1mm] \ba(R,t) \is -\ds\frac{v^2}{r}\,\be_r\,, \eqe where $\be_r=\cos\theta\,\be_1+\sin\theta\,\be_2$ is the radial unit vector. From this follows \eqb{l} \bD = \displaystyle \frac{v}{r} \big( \bar\bone - 2\, \boldsymbol{e}_r \otimes\boldsymbol{e}_r \big)\,,\quad \eqe with the 2D identity $\bar\bone := \be_1\otimes\be_1 + \be_2\otimes\be_2$, such that $\divz\bD=\mathbf{0}$. The equation of motion thus reduces to $-\nabla p=\rho\,\ba$, which can be integrated to give the pressure field \eqb{l} p(R,t) = p_\mrs + \ds\frac{\rho}{2}\big(v_\mrs^2-v^2\big)\,, \label{e:ana1_p} \eqe where $v_\mrs=v(r_\mrs)$ is the current membrane velocity, and $p_\mrs$ is the pressure acting on the membrane. 
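The kinematics and pressure field above can be verified symbolically; a minimal sketch using Python/SymPy checks that $r(R,t)$ from Eq.~\eqref{e:ana1_r} satisfies $\dot r = v(r)$ and that the pressure \eqref{e:ana1_p} satisfies the radial momentum balance $-\partial p/\partial r = \rho\,a_r$ with $a_r = -v^2/r$:

```python
# Symbolic verification of the inflated-cylinder solution: r(R, t) from
# Eq. (e:ana1_r) satisfies r_dot = v(r) = v_in R_in / r (continuity), and
# the pressure field (e:ana1_p) satisfies -dp/dr = rho * a_r with the
# radial material acceleration a_r = -v^2/r.
import sympy as sp

R, t, vin, Rin, rho, ps, rs = sp.symbols('R t v_in R_in rho p_s r_s',
                                         positive=True)
r = sp.sqrt(R**2 + 2 * vin * Rin * t)        # Eq. (e:ana1_r)
v = vin * Rin / r                            # Eq. (e:ana1_v)
assert sp.simplify(sp.diff(r, t) - v) == 0   # r_dot = v(r)

rr = sp.symbols('r', positive=True)          # field coordinate
vr = vin * Rin / rr                          # v(r)
vs = vin * Rin / rs                          # velocity at the membrane
p = ps + rho / 2 * (vs**2 - vr**2)           # Eq. (e:ana1_p)
a_r = -vr**2 / rr                            # radial acceleration
assert sp.simplify(-sp.diff(p, rr) - rho * a_r) == 0   # -grad p = rho a
```

Evaluating $p$ at $r=r_\mrs$ recovers the membrane pressure $p_\mrs$, which is used next to determine the membrane equilibrium.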
Neglecting membrane inertia, this pressure equilibrates the membrane stress \eqb{l} \sigma = \mu\,\bigg(\lambda - \ds\frac{1}{\lambda^3} \bigg) \label{e:ana1_sig} \eqe caused by the membrane stretch $\lambda=r_\mrs/R_\mrs$ according to Eq.~(\ref{e:sig_sol}); see Appendix~\ref{s:ana_mem}. From $p_\mrs = \sigma/r_\mrs$ follows \eqb{l} p_\mrs = \ds\frac{\mu}{R_\mrs} \left [1-\left(\frac{R_\mrs}{r_\mrs}\right)^4 \right]\,. \eqe \subsection{Liquid membrane example: Spinning droplet}\label{s:ana_spin} As a second example we consider a spinning droplet. This example is considered for comparison with the computational example of a rolling droplet in Sec.~\ref{s:ex2}. At very small length scales the influence of gravity is negligible, so that a rolling droplet remains approximately spherical. Considering the axis of rotation to be $\be_2$, the motion of a spinning droplet can be expressed as \eqb{l} \bx(r,t) = r\,\be_r\,, \eqe where $\be_r=\cos\theta\,\be_1 - \sin\theta\,\be_3$, $\theta = \omega t$ and $\omega$ denotes the angular velocity around $\be_2$. Consequently, \eqb{lll} \bv(r,t) \is \omega\,r\,\be_\theta\,, \\[1mm] \ba(r,t) \is -\omega^2 r\,\be_r\,, \eqe where $\be_\theta=-\sin\theta\,\be_1 - \cos\theta\,\be_3$. Since we can write $x_1=r\cos\theta$ and $x_3=-r\sin\theta$, we find $\nabla\bv=\omega(\be_1\otimes\be_3-\be_3\otimes\be_1)$ such that $\bD=\mathbf{0}$ and \eqb{l} \bsig = -p\,\bone\,. \eqe The spin tensor, defined as $\bW:=\big(\bL-\bL^\mathrm{T}\big)/2$, then becomes $\bW=\bL=\nabla\bv$. The axial vector of $\bW$, denoted by $\bome$, thus is $\bome =\omega\,\be_2$. It denotes the orientation and magnitude of the droplet's spin, and it is equal to half of the vorticity $\nabla\times\bv$. Solving Eq.~\eqref{e:sf_f} (with $\bar\bff=\mathbf{0}$) for $p$ now gives \eqb{l} p(r) = \ds\omega^2\rho\frac{r^2}{2} + p_0\,. 
\eqe The constant $p_0$ follows from the boundary condition $p(r_0)=2\gamma/r_0$, where $\gamma$ is the surface tension of the droplet and $r_0$ is the droplet radius. This condition enforces the Young-Laplace equation, which is contained inside Eq.~\eqref{e:sf_s}, see \citet{droplet}. Applying the boundary condition, we find \eqb{l} p(r) = \ds\frac{2\gamma}{r_0} - \frac{\rho\,\omega^2}{2}\big(r_0^2-r^2\big)\,. \eqe If desired, the constant velocity $\bv_0=\omega\,r_0\,\be_1$ can be added to $\bv(r,t)$, such that the resulting velocity is zero at the contact point (where $\theta=\pi/2$). \section{Finite element formulation}\label{s:FE} The coupled fluid-membrane problem of Sec.~\ref{s:theo} is solved with the finite element method using the generalized-$\alpha$ scheme. This section presents the required discretization steps and the resulting algebraic equations. \subsection{Spatial discretization} The computational domain is discretized into $n_\mathrm{el}$ finite elements, numbered $e=1,...,n_\mathrm{el}$. Some of these elements are 3D fluid elements, others are 2D surface elements or 1D line elements. Element $e$ contains $n_e$ nodes and occupies the domain $\Omega_e$ in the current configuration. Each fluid element has four degrees-of-freedom (dofs) per node (three velocity components and a pressure), while the membrane elements each have three unknown displacements per node. Each fluid element therefore contributes $4n_e$ force components, while each membrane element contributes $3n_e$ force components that need to be assembled into the global system. Those elemental forces are discussed in the following two sections. \subsubsection{Fluid flow} \textit{4.1.1.1 Basic flow variables} Within a fluid element, the fluid velocity is approximated by the interpolation \eqb{l} \bv \approx \bv^h = \ds\sum_{I=1}^{n_e}N_I\,\bv_I \,, \eqe where $N_I$ and $\bv_I$ are the nodal shape function and nodal velocity, respectively. 
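The nodal interpolation $\bv^h=\sum_I N_I\,\bv_I$ can be illustrated with a minimal sketch; the 2-node line element and its linear shape functions below are chosen purely for illustration:

```python
import numpy as np

# Minimal sketch of the interpolation v^h = sum_I N_I v_I; element type
# and nodal values are illustrative assumptions.
def shape_functions(xi):
    """Linear Lagrange shape functions on the master element [-1, 1]."""
    return np.array([0.5 * (1.0 - xi), 0.5 * (1.0 + xi)])

# nodal velocities v_I (illustrative values)
v_nodes = np.array([[1.0, 0.0, 0.0],
                    [0.0, 2.0, 0.0]])

def v_h(xi):
    """Interpolated velocity at master-element coordinate xi."""
    N = shape_functions(xi)
    return sum(N[I] * v_nodes[I] for I in range(len(N)))
```

Note that the shape functions form a partition of unity, so a constant nodal field is reproduced exactly.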
In short, this can also be written as \eqb{l} \bv \approx \bv^h = \mN\,\mv_e \,, \label{e:bvh}\eqe where $\mN:=[N_1\bone,\,N_2\bone,\,...,\,N_{n_e}\bone]$ and $\mv_e := [\bv_1,\,\bv_2,\,...,\,\bv_{n_e}]^\mathrm{T}$. The corresponding test function (or variation) is approximated in the same fashion, i.e. \eqb{l} \bw \approx \bw^h = \mN\,\mw_e \,. \label{e:bwh}\eqe The fluid pressure is approximated by the interpolation \eqb{l} p \approx p^h = \tilde\mN\,\mpp_e \,, \eqe where $\tilde\mN:=[N_1,\,N_2,\,...,\,N_{n_e}]$. Likewise, \eqb{l} q \approx q^h = \tilde\mN\,\mq_e \,. \eqe The structure of \eqref{e:bvh} is also used to interpolate the mesh motion, i.e. \eqb{l} \bv_\mrm \approx \bv^h_\mrm = \mN\,\mv_{\mrm e} \,. \label{e:bvm}\eqe In the present work, the $\mv_{\mrm e}$ are not treated as unknowns. Instead they will be defined through the membrane motion. \textit{4.1.1.2 Derived flow variables} As a consequence of the above expressions, we find the approximation of the acceleration (from Eq.~\eqref{e:bvdot}) \eqb{l} \dot\bv \approx \dot\bv^h = \mN\,\mv'_e + \bL\mN\big(\mv_e-\mv_{\mrm e}\big)\,, \eqe the velocity gradient \eqb{l} \bL \approx \bL^h = \ds\sum_{I=1}^{n_e} \bv_I \otimes \nabla N_I\,, \eqe the pressure gradient \eqb{l} \nabla p \approx \nabla p^h = \mG\,\mpp_e\,, \eqe and the velocity divergence \eqb{l} \divz\bv \approx \divz\bv^h = \mD\,\mv_e\,, \eqe where \eqb{l} \nabla N_I = \left[\begin{matrix} N_{I,1} \\ N_{I,2} \\ N_{I,3} \end{matrix}\right], \eqe $\mG := [\nabla N_1,\,\nabla N_2,\,...,\,\nabla N_{n_e}]$ and $\mD := [(\nabla N_1)^\mathrm{T},\,(\nabla N_2)^\mathrm{T},\,...,\,(\nabla N_{n_e})^\mathrm{T}]$. 
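The discrete operators $\mG$ and $\mD$ can be sketched as follows; the shape-function gradients are arbitrary illustrative numbers, and the cross-check uses $\divz\bv^h=\tr\bL^h$:

```python
import numpy as np

# Sketch of the discrete pressure-gradient operator G and velocity-
# divergence operator D, built from shape-function gradients grad N_I.
rng = np.random.default_rng(0)
n_e = 4
gradN = rng.standard_normal((n_e, 3))      # row I holds grad N_I

G = gradN.T                                # 3 x n_e
D = gradN.reshape(1, 3 * n_e)              # 1 x 3n_e

p_e = rng.standard_normal(n_e)             # nodal pressures
v_e = rng.standard_normal(3 * n_e)         # stacked nodal velocities

grad_p = G @ p_e                           # grad p^h = G p_e
div_v = D @ v_e                            # div v^h = D v_e

# cross-check: div v^h = tr(L^h) with L^h = sum_I v_I (x) grad N_I
L = sum(np.outer(v_e[3*I:3*I+3], gradN[I]) for I in range(n_e))
```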
Further, we introduce the classical B-matrix $\mB := [\mB_1,\,\mB_2,\,...,\,\mB_{n_e}]$, with \eqb{l} \mB_I := \left[\begin{matrix} N_{I,1} & 0 & 0\\ 0 & N_{I,2} & 0 \\ 0 & 0 & N_{I,3} \\ 0 & N_{I,3} & N_{I,2}\\ N_{I,3} & 0 & N_{I,1} \\ N_{I,2} & N_{I,1} & 0 \end{matrix}\right], \eqe in order to express the symmetric velocity gradient and its corresponding variation in Voigt notation (indicated by index `v') as \eqb{rllll} \nabla^s\bv_\mrv \ais \nabla^s\bv_\mrv^h \is \mB\,\mv_e\,, \\[1mm] \nabla^s \bw_\mrv \ais \nabla^s\bw_\mrv^h \is \mB\,\mw_e\,, \eqe i.e.~arranged as $\nabla^s\bv_\mrv := [v_{1,1},\,v_{2,2},\,v_{3,3},\,v_{2,3}+v_{3,2},\,v_{1,3}+v_{3,1},\,v_{1,2}+v_{2,1}]^\mathrm{T}$. The stress tensor, arranged as $\sig_\mrv:=[\sig_{11},\,\sig_{22},\,\sig_{33},\,\sig_{23},\,\sig_{13},\,\sig_{12}]^\mathrm{T}$, can thus be written as \eqb{l} \bsig_\mrv \approx \bsig_\mrv^h = \bbC\,\mB\,\mv_e- \bone_\mrv\,\tilde\mN\,\mpp_e\,, \eqe with $\bbC:=\mathrm{diag}(2\eta\bone,\,\eta\bone)$ and $\bone_\mrv = [1,\,1,\,1,\,0,\,0,\,0]^\mathrm{T}$. Here, $\bone$ is the usual identity tensor in $\bbR^3$. Due to the symmetry of the stress and since $\mB^\mathrm{T}\bone_\mrv = \mD^\mathrm{T}$, the integrand of $G_{\sF\mathrm{int}}$ becomes \eqb{l} \nabla\bw^h:\bsig^h = \mw_e^\mathrm{T}\,\mB^\mathrm{T}\,\bbC\,\mB\,\mv_e - \mw_e^\mathrm{T}\,\mD^\mathrm{T}\,\tilde\mN\,\mpp_e \eqe within element $\Omega^e$. \\ In order to represent the SUPG term, we introduce the arrays $\mB_\mrf := [\mB_{\mrf1},\,\mB_{\mrf2},\,...,\,\mB_{\mrf n_e}]$, with the $3\times 3$ blocks \eqb{l} \mB_{\mrf I} := \nabla N_I\otimes\bff_{\!\mathrm{res}}\,, \eqe and $\mB_\mrv := [B_{\mrv1}\bone,\,B_{\mrv2}\bone,\,...,\,B_{\mrv n_e}\bone]$, with \eqb{l} B_{\mrv I} := \nabla N_I\cdot(\bv-\bv_\mrm)\,. \eqe The last expression can also be used to rewrite the $\bL(\bv-\bv_\mrm)$ term as \eqb{l} \bL^h\,(\bv^h-\bv^h_\mrm) = \mB_\mrv\mv_e\,.
\eqe \textit{4.1.1.3 Weak form contribution of a fluid element} Given the above expressions, the contributions from element $\Omega^e$ to the fluid weak form \eqref{e:wfF} can be written as \eqb{l} G_\sF^e+G^e_\sG = \mw_e^\mathrm{T}\,\mf^e_\sF+\mq_e^\mathrm{T}\mg^e\,, \label{e:GF}\eqe with the ($3n_e\times1$) FE force vector \eqb{l} \mf^e_\sF := \left\{ \begin{array}{ll} \mf^e_{\sF\mathrm{in}}+\mf^e_{\sF\mathrm{int}}+\mf^e_\mathrm{supg}-\mf^e_{\sF\mathrm{ext}\bar f} ~& $for $\Omega^e\subset\sF^h\,, \\[2mm] -\mf^e_{\sF\mathrm{ext}\bar t} ~& $for $\Omega^e\subset\partial_t\sF^h\,, \\[2mm] -\mf^e_{\sF\mrs} ~& $for $\Omega^e\subset\sS^h\,, \end{array}\right. \label{e:f_eFd} \eqe and the ($n_e\times1$) FE pseudo force vector \eqb{l} \mg^e := \mg^e_\mrg + \mg^e_\mathrm{pspg}\,. \eqe They are composed of the FE forces \eqb{lll} \mf^e_{\sF\mathrm{in}} \dis \mm_e\,\mv'_e + \mf^e_\mathrm{con}\,, \\[3mm] \mf^e_\mathrm{con} \dis \ds\int_{\Omega^e}\rho\,\mN^\mathrm{T}\mB_\mrv\mv_e\,\dif v\,, \\[4mm] \mf^e_{\sF\mathrm{int}} \dis \mcc_e\,\mv_e - \md_e\,\mpp_e\,, \\[3mm] \mf^e_\mathrm{supg} \dis \ds\int_{\Omega^e} \tau_\mrv\,\mB^\mathrm{T}_\mrf(\bv-\bv_\mrm)\,\dif v = \int_{\Omega^e} \tau_\mrv\,\mB^\mathrm{T}_\mrv\bff_{\!\mathrm{res}}\,\dif v\,, \\[4mm] \mf^e_{\sF\mrs} \dis \ds\int_{\Omega^e}\mN^\mathrm{T}\,\bt\,\dif a \,, \\[4mm] \mf^e_{\sF\mathrm{ext}\bar f} \dis \ds\int_{\Omega^e}\mN^\mathrm{T}\,\bar\bff\,\dif v\, \\[4mm] \mf^e_{\sF\mathrm{ext}\bar t} \dis \ds\int_{\Omega^e}\mN^\mathrm{T}\,\bar\bt\,\dif a\,, \label{e:f_eF}\eqe the FE pseudo forces \eqb{lll} \mg^e_\mrg \dis \md_e^\mathrm{T}\,\mv_e\,, \\[3mm] \mg^e_\mathrm{pspg} \dis \ds\int_{\Omega^e}\tau_\mrp\,\mG^\mathrm{T}\bff_{\!\mathrm{res}}\,\dif v\,, \label{e:f_eG}\eqe and the elemental mass, damping and pressure-force matrices \eqb{lll} \mm_e \dis \ds\int_{\Omega^e}\rho\,\mN^\mathrm{T}\mN\,\dif v\,, \\[4mm] \mcc_e \dis \ds\int_{\Omega^e}\mB^\mathrm{T}\,\bbC\,\mB\,\dif v\,, \\[4mm] \md_e \dis 
\ds\int_{\Omega^e}\mD^\mathrm{T}\tilde\mN\,\dif v\,. \label{e:mcd_e}\eqe The tangent matrices of $\mf^e_\sF$ and $\mg^e$, needed for linearization, can be found in Appendix~\ref{s:FE_kF}. \textbf{Remark 4.1}: One may simply change the sign of both $\mg^e_\mrg$ and $\mg^e_\mathrm{pspg}$ in order to highlight the symmetry between the second part of $\mf^e_{\sF\mathrm{int}}$ and $\mg^e_\mrg$. \textit{4.1.1.4 Stabilization terms} In order to evaluate the residual $\bff_{\!\mathrm{res}}$ that appears in the stabilization terms $\mf^e_\mathrm{supg}$ and $\mg^e_\mathrm{pspg}$, we note that \eqb{l} 2\,\divz\bD^h = (v^h_{j,ij} + v^h_{i,jj})\,\be_i = (\mG^2+\mH)\,\mv_e\,, \eqe where $\mG^2 := [\mG^2_1,\,\mG^2_2,\,...,\,\mG^2_{n_e}]$, with \eqb{l} \mG^2_I := \nabla(\nabla N_I) = \left[\begin{matrix} N_{I,11} & N_{I,12} & N_{I,13} \\ N_{I,21} & N_{I,22} & N_{I,23} \\ N_{I,31} & N_{I,32} & N_{I,33} \end{matrix}\right] \eqe and $\mH := [H_1\bone,\,H_2\bone,\,...,\,H_{n_e}\bone]$, with \eqb{l} H_I := \tr\mG^2_I = N_{I,11} + N_{I,22} + N_{I,33}\,. \eqe With this we can write \eqb{l} \divz\bsig^h = \eta\,\mF\,\mv_e - \mG\,\mpp_e\,, \eqe where $\mF=\mG^2+\mH$. Thus we obtain \eqb{l} \bff_\mathrm{\!res} \approx \bff^h_\mathrm{\!res} = \rho\,\mN\,\mv'_e + \rho\,\mB_\mrv\mv_e - \eta\,\mF\,\mv_e + \mG\,\mpp_e - \bar\bff\,. 
\eqe The stabilization parameters $\tau_\mrv$ and $\tau_\mrp$ appearing inside $\mf^e_\mathrm{supg}$ and $\mg^e_\mathrm{pspg}$ are computed from \eqb{l} \tau_\mrv = \tau_\mrp = \ds\Bigg[ \bigg(\frac{2}{\Delta t}\bigg)^2 + \bigg(\frac{2\norm{\bv}}{m_e\,h_e}\bigg)^2 +\bigg(\frac{4\nu}{m_e\,h_e^2}\bigg)^2\Bigg]^{-\frac{1}{2}} \label{e:tau_vp}\eqe \citep{shakib,tezduyar92,ESEflow}, where $\Delta t$ is the time step size, $h_e$ is the ``element length'' in the local flow direction taken from \eqb{l} \ds\frac{1}{h_e} = \frac{1}{2}\sum_{I=1}^{n_e}\bigg|\nabla N_I\cdot\frac{\bv}{\norm{\bv}}\bigg| \label{e:h_e} \eqe \citep{tezduyar92} and $m_e$ depends on the polynomial order of the shape functions. That is, for L1 (linear Lagrange) and L2 (quadratic Lagrange) elements we have $m_e=1/3$ and $m_e=1/12$, respectively.\footnote{In Eqs.~\eqref{e:tau_vp} and \eqref{e:h_e}, $\bv$ is taken from the previous time step in order to avoid the linearization of $\tau_\mrv$ and $\tau_\mrp$.} Accordingly, $\tau_\mrv$ and $\tau_\mrp$ are local quantities that change from quadrature point to quadrature point. \textit{4.1.1.5 Transformation of derivatives} In the above expressions $\nabla N_I$ denotes the gradient w.r.t.~the current configuration $\bx$, which is discretized by $\bx^h=\sum_IN_I\,\mx_{\mrm I}$, where $\mx_{\mrm I}$ are the nodal positions of the FE mesh. Since it is convenient to define the shape functions on a master element in $\bxi=[\xi,\,\eta,\,\zeta]^\mathrm{T}$ space, where $\partial N_I/\partial\bxi$ is easily obtained, $\nabla N_I$ needs to be determined from \eqb{l} \nabla N_I = \ds\pa{N_I}{\bx} = \bj^{-\mathrm{T}}\,\pa{N_I}{\bxi}\,, \label{e:N,x}\eqe where \eqb{l} \bj = \ds\pa{\bx^h}{\bxi} = \ds\sum_{I=1}^{n_e}\bx_{\mrm I}\otimes\pa{N_I}{\bxi} \eqe denotes the Jacobian of the mapping $\bxi\rightarrow\bx$.
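A minimal sketch of Eq.~\eqref{e:N,x} for one element; the 4-node tetrahedron with linear shape functions and the nodal positions are illustrative assumptions:

```python
import numpy as np

# Sketch: shape-function gradients w.r.t. the current configuration,
# obtained from master-element derivatives via the Jacobian j.
# Master-element derivatives dN_I/dxi of a linear tetrahedron with
# N = [1-xi-eta-zeta, xi, eta, zeta] (illustrative choice).
dN_dxi = np.array([[-1.0, -1.0, -1.0],
                   [ 1.0,  0.0,  0.0],
                   [ 0.0,  1.0,  0.0],
                   [ 0.0,  0.0,  1.0]])
# current nodal positions x_I (illustrative values)
x_e = np.array([[0.0, 0.0, 0.0],
                [2.0, 0.0, 0.0],
                [0.0, 3.0, 0.0],
                [0.0, 0.0, 4.0]])

# Jacobian j = sum_I x_I (x) dN_I/dxi
j = sum(np.outer(x_e[I], dN_dxi[I]) for I in range(4))
# grad N_I = j^{-T} dN_I/dxi, stacked row-wise
gradN = (np.linalg.inv(j).T @ dN_dxi.T).T
```

The gradients satisfy $\sum_I\nabla N_I=\mathbf{0}$ (partition of unity) and reproduce the gradient of any linear field exactly.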
Likewise, the second derivative $\mG^2_I=\nabla(\nabla N_I)$ is obtained from the formula \eqb{l} \mG^2_I = \ds\paqq{N_I}{\bx}{\bx} = \bj^{-\mathrm{T}}\Bigg[\sum_{J=1}^{n_e}\Big(\delta_{IJ} - \nabla N_I\cdot\bx_{\mrm J}\Big)\,\paqq{N_J}{\bxi}{\bxi} \Bigg]\,\bj^{-1} \label{e:N,xx}\eqe that follows from differentiating \eqref{e:N,x}. Eq.~\eqref{e:N,xx} is equivalent to the expression given in \citet{dhatt}. \subsubsection{Membrane deformation} Following the notation of Eq.~\eqref{e:bvh}, the reference position and the current position within a membrane element are approximated by the interpolations \eqb{lllll} \bX \ais \bX^h \is \mN\,\mX_e\,, \\[1mm] \bx \ais \bx^h \is \mN\,\mx_e\,, \label{e:bxh}\eqe where $\mX_e$ and $\mx_e$ are arranged just like $\mv_e$. From this follows \eqb{lllll} \bA_\alpha \ais \bA^h_\alpha \is \mN_{,\alpha}\,\mX_e\,, \\[1mm] \ba_\alpha \ais \ba^h_\alpha \is \mN_{,\alpha}\,\mx_e\,, \eqe where $\mN_{,\alpha}:=[N_{1,\alpha}\bone,\,N_{2,\alpha}\bone,\,...,\,N_{n_e,\alpha}\bone]$. Likewise, \eqb{l} \bw_{,\alpha} \approx \bw^h_{,\alpha} = \mN_{,\alpha}\,\mw_e \eqe follows from Eq.~\eqref{e:bwh}. Given $\bA_\alpha$ and $\ba_\alpha$, the metric tensor components $A^{\alpha\beta}$ and $a^{\alpha\beta}$ can be determined and the stress can be evaluated as discussed in Sec.~\ref{s:theo_s}. Inserting the discretized expressions for $\dot\bv$, $\ba_\alpha$, $\bw$ and $\bw_\alpha$ into the membrane weak form \eqref{e:wfS} yields the elemental weak form contribution \eqb{l} G_\sS^e = \mw_e^\mathrm{T}\,\mf^e_\sS\,, \label{e:GS}\eqe with the ($3n_e\times1$) FE force vector \eqb{l} \mf^e_\sS := \left\{ \begin{array}{ll} \mf^e_{\sS\mathrm{in}}+\mf^e_{\sS\mathrm{int}}+\mf^e_\mrc-\mf^e_{\sS\mrf}-\mf^e_{\sS\mathrm{ext}\bar f} ~& $for $\Omega^e\subset\sS^h\,, \\[2mm] -\mf^e_{\sS\mathrm{ext}\bar t} & $for $\Omega^e\subset\partial_t\sS^h\,, \end{array}\right. 
\eqe that is composed of \eqb{lll} \mf_{\sS\mathrm{in}}^e \dis \ds\int_{\Omega^e} \rho_\mrs\,\mN^\mathrm{T}\mN\,\dif a~\dot\mv_e\,, \\[4mm] \mf_{\sS\mathrm{int}}^e \dis \ds\int_{\Omega^e}\sig^{\alpha\beta}\,\mN^\mathrm{T}_{,\alpha}\,\mN_{,\beta}\,\dif a~\mx_e\,,\\[4mm] \mf^e_{\mrc} \dis - \ds\int_{\Omega^e} \mN^\mathrm{T}\,\bff_{\!\mrc}\,\dif a\,, \\[4mm] \mf_{\sS\mrf}^e \dis \ds\int_{\Omega^e} \mN^\mathrm{T}\,\bff_{\!\mrf}\,\dif a\,, \\[4mm] \mf_{\sS\mathrm{ext}\bar f}^e \dis \ds\int_{\Omega^e} \mN^\mathrm{T}\,\bar\bff_{\!\mrs}\,\dif a\,, \\[4mm] \mf_{\sS\mathrm{ext}\bar t}^e \dis \ds\int_{\Omega^e}\mN_\mrt^\mathrm{T}\,\bar\bt_\mrs\,\dif s\,. \label{e:f_icfe}\eqe Using a quadrature-point-based contact formulation, the discretization of the contact traction $\bff_{\!\mrc}$ is straightforward (expression \eqref{e:fc} is simply evaluated at each quadrature point), but an active set strategy needs to be implemented in order to handle the state changes between contact and no contact \citep{wriggers-contact}.\\ The tangent matrix of $\mf^e_\sS$, needed for the linearization, can be found in Appendix~\ref{s:FE_kS}. \subsubsection{Coupled system} Combining contributions \eqref{e:GF} and \eqref{e:GS} yields the coupled weak form \eqb{l} G^e = \mw_e^\mathrm{T}\,\mf^e + \mq_e^\mathrm{T}\,\mg^e \,, \label{e:G}\eqe with the ($3n_e\times1$) FE force vector \eqb{l} \mf^e := \mf^e_\sF + \mf^e_\sS = \left\{ \begin{array}{ll} \mf^e_{\sF\mathrm{in}}+\mf^e_{\sF\mathrm{int}}+\mf^e_\mathrm{supg}-\mf^e_{\sF\mathrm{ext}\bar f} ~& $for $\Omega^e\subset\sF^h\,, \\[2mm] -\mf^e_{\sF\mathrm{ext}\bar t} & $for $\Omega^e\subset\partial_t\sF^h\,, \\[2mm] \mf^e_{\sS\mathrm{in}}+\mf^e_{\sS\mathrm{int}}+\mf^e_\mrc-\mf^e_{\sS\mathrm{ext}\bar f} ~& $for $\Omega^e\subset\sS^h\,, \\[2mm] -\mf^e_{\sS\mathrm{ext}\bar t} & $for $\Omega^e\subset\partial_t\sS^h\,. \end{array}\right.
\eqe It can be seen that for a conforming FE discretization of surface $\sS$, such as is considered here, coupling condition \eqref{e:coupt} implies that the force vector $\mf^e_{\sS\mrf}$ of a membrane element cancels exactly with $\mf^e_{\sF\mrs}$ of the corresponding fluid boundary element. Both $\mf^e_{\sS\mrf}$ and $\mf^e_{\sF\mrs}$ therefore no longer appear in the coupled system. \subsubsection{Double pressure nodes}\label{s:2xp} Since the membrane is described here as a 2D surface that is discretized by 2D surface finite elements, the membrane nodes play a special role. Unless the membrane is located at the boundary of the fluid, it is surrounded by fluid on both sides and generally supports pressure jumps. A finite element node on $\sS^h$ must therefore carry two pressure dofs, one for each side of the membrane. Otherwise, the formulation does not properly account for pressure jumps. This is especially important for flexible membranes, where pressure jumps tend to become large. In practice, each FE node on $\sS^h$ that is not located at boundary $\partial\sS^h$ (where both fluid sides connect) is assigned two pressure dofs.\footnote{\citet{Tezduyar:2007eb} propose to also use double pressure dofs at the boundary $\partial\sS^h$ in order to provide additional numerical stability.} When the elemental connectivity is then set up, care has to be taken to connect the elements on each side of $\sS^h$ with the correct dofs. \\ As long as a no-slip condition is considered on both sides of $\sS$, as is done here, the velocity field is continuous across $\sS$ and no extra velocity degrees of freedom are needed on $\sS^h$.
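The bookkeeping described above can be sketched as follows; the node sets and the simple sequential dof numbering are illustrative assumptions, not the actual data structures of the implementation:

```python
# Sketch of double-pressure-node bookkeeping: every node on the membrane
# surface S^h that is not on its boundary receives two pressure dofs,
# one per fluid side; all other nodes keep a single pressure dof.
def pressure_dof_map(nodes, membrane_nodes, membrane_boundary_nodes):
    """Return {node: (dof_side1, dof_side2)}; regular nodes get the same
    dof on both sides (continuous pressure)."""
    dof_map, next_dof = {}, 0
    for n in nodes:
        if n in membrane_nodes and n not in membrane_boundary_nodes:
            dof_map[n] = (next_dof, next_dof + 1)   # pressure jump possible
            next_dof += 2
        else:
            dof_map[n] = (next_dof, next_dof)       # continuous pressure
            next_dof += 1
    return dof_map
```

When the element connectivity is built, fluid elements on side 1 of $\sS^h$ would then reference `dof_side1` and those on side 2 reference `dof_side2`.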
\subsection{Temporal discretization}\label{s:temp} The elemental force vectors $\mf^e$ and $\mg^e$ are assembled into the global vectors \eqb{l} \mf = \mf_{\sF\mathrm{in}} + \mf_{\sS\mathrm{in}} + \mf_{\sF\mathrm{int}}+\mf_{\sS\mathrm{int}}+\mf_\mrc+\mf_\mathrm{supg}-\mf_\mathrm{ext} \eqe and \eqb{l} \mg = \mg_\mrg+\mg_\mathrm{pspg}\,, \eqe where $\mf_\mathrm{ext}:=\mf_{\sF\mathrm{ext}\bar f}+\mf_{\sF\mathrm{ext}\bar t}+\mf_{\sS\mathrm{ext}\bar f}+\mf_{\sS\mathrm{ext}\bar t}$. The former can be written as $\mf=[\mf_\mathrm{br}^\mathrm{T},\,\mf_\mrr^\mathrm{T}]^\mathrm{T}$, where $\mf_\mathrm{br}$ are the boundary reactions of the nodes on $\partial_{\hat x}\sF$ and $\partial_x\sS$, and $\mf_\mrr$ are the residual forces of all the remaining nodes. Accordingly, the global residual vector \eqb{l} \mr := \left[\begin{matrix} \mf_\mrr \\[1mm] \mg \end{matrix}\right], \eqe can be defined. The finite element forces are in equilibrium if $\mr=\mathbf{0}$. In general, $\mr=\mathbf{0}$ is a coupled system of ordinary differential equations for the unknown nodal positions $\mx := [\bx_I]$, velocities $\mv:=[\bv_I]$, accelerations $\ma:=[\bv'_I]$ (for fixed $\mx$) and pressures $\mpp:=[p_I]$, for $I=1,...,n_\mathrm{no}$, that are all functions of time. The generalized-$\alpha$ scheme \citep{chung93,jansen99,cottrell} is used to discretize $\mr=\mathbf{0}$ in time. Instead of solving for the functions $\mx(t)$, $\mv(t)$, $\ma(t)$ and $\mpp(t)$, the approximations $\mx^n\approx\mx(t_n)$, $\mv^n\approx\mv(t_n)$, $\ma^n\approx\ma(t_n)$ and $\mpp^n\approx\mpp(t_n)$ are determined at discrete time steps $t_n$, $n=0,...,n_t$. 
This is based on the Newmark update formulas for step $t_n\rightarrow t_{n+1}$ \eqb{lll} \mx^{n+1} \is \mx^n + \Delta t\,\mv^n + \ds\frac{\Delta t^2}{2}\,\big((1-2\beta)\,\ma^n + 2\beta\,\ma^{n+1} \big)\,,\\[3mm] \mv^{n+1} \is \mv^n + \Delta t\,\big((1-\gamma)\,\ma^n + \gamma\,\ma^{n+1} \big)\,, \label{e:Newmark}\eqe where $\beta$ and $\gamma$ are non-dimensional parameters.\footnote{They should not be confused with the physical parameters $\beta$ and $\gamma$ used for the surface inclination and surface tension in other sections.} According to the generalized-$\alpha$ scheme, $\mr$ is then evaluated for $\mpp^{n+1}$ and \eqb{lllll} \mx^{n+\alpha_\mrf} \is \mx^n \plus \alpha_\mrf\,(\mx^{n+1}-\mx^n)\,, \\[2mm] \mv^{n+\alpha_\mrf} \is \mv^n \plus \alpha_\mrf\,(\mv^{n+1}-\mv^n)\,, \\[2mm] \ma^{n+\alpha_\mrm} \is \ma^n \plus \alpha_\mrm\,(\ma^{n+1}-\ma^n)\,, \label{e:gen-a}\eqe where $0<\alpha_\mrm\leq1$ and $0<\alpha_\mrf\leq1$ are chosen parameters.\footnote{Note that the $\alpha$ introduced by \citet{chung93} corresponds to $1-\alpha$ here.} The global force vectors thus take the form \eqb{lll} \mf \is \mf_{\sF\mathrm{in}}\big(\ma^{n+\alpha_\mrm},\mv^{n+\alpha_\mrf}\big) + \mf_{\sS\mathrm{in}}\big(\ma^{n+\alpha_\mrm}\big) + \mf_{\sF\mathrm{int}}\big(\mv^{n+\alpha_\mrf},\mpp^{n+1}\big) + \mf_{\sS\mathrm{int}}\big(\mx^{n+\alpha_\mrf}\big) \\[2mm] \plus \mf_\mrc\big(\mx^{n+\alpha_\mrf}\big)+ \mf_\mathrm{supg}\big(\ma^{n+\alpha_\mrm},\mv^{n+\alpha_\mrf},\mpp^{n+1}\big) - \mf_\mathrm{ext}\,, \\[3mm] \mg \is \mg_\mrg\big(\mv^{n+\alpha_\mrf}\big) + \mg_\mathrm{pspg}\big(\ma^{n+\alpha_\mrm},\mv^{n+\alpha_\mrf},\mpp^{n+1}\big)\,. \label{e:globsys}\eqe The temporal inconsistency that is introduced if $\alpha_\mrm\neq1$ or $\alpha_\mrf\neq1$ is a deliberate feature of the generalized-$\alpha$ method.
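A minimal sketch of one generalized-$\alpha$ step, combining the Newmark updates \eqref{e:Newmark} with the intermediate-level evaluations \eqref{e:gen-a}; scalars stand in for the global vectors, and the parameter formulas for a spectral radius $\rho_\infty=\frac{1}{2}$ follow the first-order-system version of \citet{jansen99} (an assumption stated explicitly here):

```python
# Sketch of the Newmark updates and generalized-alpha levels; scalar
# stand-ins replace the global vectors x, v, a for brevity.
def newmark_update(x_n, v_n, a_n, a_np1, dt, beta, gamma):
    x_np1 = x_n + dt * v_n + 0.5 * dt**2 * ((1 - 2 * beta) * a_n
                                            + 2 * beta * a_np1)
    v_np1 = v_n + dt * ((1 - gamma) * a_n + gamma * a_np1)
    return x_np1, v_np1

def alpha_level(y_n, y_np1, alpha):
    """Intermediate level y^{n+alpha} = y^n + alpha (y^{n+1} - y^n)."""
    return y_n + alpha * (y_np1 - y_n)

# parameters for spectral radius rho_inf = 1/2 (first-order systems)
rho_inf = 0.5
alpha_m = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)   # -> 5/6
alpha_f = 1.0 / (1.0 + rho_inf)                     # -> 2/3
gamma = 0.5 - alpha_f + alpha_m
beta = 0.25 * (1.0 - alpha_f + alpha_m)**2
```

For constant acceleration the Newmark formulas reduce to the exact kinematic relations, independently of $\beta$ and $\gamma$.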
The system $\mr=\mathbf{0}$ thus reduces to a system of algebraic equations that can be solved for $\mx^{n+1}$, $\mv^{n+1}$, $\ma^{n+1}$ and $\mpp^{n+1}$ given the previous values $\mx^n$, $\mv^n$, $\ma^n$ and $\mpp^n$. One option is to pick $\muu:=[\mv,\,\mpp]$ as the primary unknowns, solve $\mr=\mathbf{0}$ for $\muu^{n+1}$, and then obtain $\ma^{n+1}$ and $\mx^{n+1}$ (which is really only needed for the membrane nodes) from \eqref{e:Newmark}. Since the system $\mr=\mathbf{0}$ is non-linear, the Newton-Raphson method is used.\footnote{A direct sparse solver is used in all subsequent examples apart from the finest droplet discretization in Sec.~5.2, which uses the conjugate gradient method preconditioned by an incomplete LU factorization.} This requires the tangent matrix $\mk$ that is assembled from the elemental entries \eqb{l} \mk^e := \ds\pa{\mr^e}{\muu^{n+1}_e}\,. \label{e:kFE}\eqe It is given in Appendix~\ref{s:FE_kt} for the considered fluid and membrane elements. In the following computations, the Newmark parameters are taken as \citep{chung93} \eqb{lll} \gamma \is \ds\frac{1}{2} - \alpha_\mrf + \alpha_\mrm\,, \\[3mm] \beta \is \ds\frac{1}{4}\big(1-\alpha_\mrf+\alpha_\mrm\big)^2 \eqe using the generalized-$\alpha$ parameters\footnote{They are obtained taking a spectral radius of $\rho_\infty = \frac{1}{2}$ for the first order system, see \citet{jansen99}.} \eqb{l} \alpha_\mrf = \ds\frac{2}{3}\,,\quad \alpha_\mrm = \ds\frac{5}{6}\,. \eqe This choice ensures second order accuracy in time and unconditional stability (for linear problems). \subsection{Normalization}\label{s:norm} In order to implement the above expressions within a computer code\footnote{In this work a self-written parallel Matlab code is used on a 12-core Apple workstation (2x 2.66 GHz 6-Core Intel Xeon, 64 GB DDR3 RAM).} they have to be normalized. The normalization can also help to improve the conditioning of the monolithic system of equations. 
We therefore choose a length scale $L_0$, time scale $T_0$ and force scale $F_0$, and use those to normalize all lengths, times and forces in the system. Velocities, masses, fluid densities, fluid viscosities, fluid pressures, membrane densities and membrane stresses are then normalized by the scales \eqb{l} v_0 := \ds\frac{L_0}{T_0}\,,\quad m_0 := \ds\frac{F_0T_0^2}{L_0}\,,\quad \rho_0 := \ds\frac{m_0}{L_0^3}\,,\quad \eta_0 := \ds\frac{F_0T_0}{L_0^2}\,,\quad p_0 := \ds\frac{F_0}{L_0^2}\,,\quad \rho^\mrs_0 := \ds\frac{m_0}{L_0^2}\,,\quad \gamma_0 := \ds\frac{F_0}{L_0}\,. \eqe System \eqref{e:globsys} can then be expressed in the normalized form \eqb{lll} \bar\mf(\bar\muu^{n+1}) \is \bar\mf_{\sF\mathrm{in}} + \bar\mf_{\sS\mathrm{in}} + \bar\mf_{\sF\mathrm{int}} + \bar\mf_{\sS\mathrm{int}} + \bar\mf_\mrc + \bar\mf_\mathrm{supg} - \bar\mf_\mathrm{ext}\,, \\[3mm] \bar\mg(\bar\muu^{n+1}) \is \bar\mg_\mrg + \bar\mg_\mathrm{pspg}\,, \label{e:globsysbar}\eqe where a bar denotes normalization with the corresponding scale from above, e.g. \eqb{l} \bar\mf^e_{\sF\mathrm{in}} = \bar\mm_e\,\bar\ma_e + \bar\mf^e_\mathrm{con}\,, \eqe with \eqb{lll} \bar\mm_e \dis \ds\int_{\bar\Omega^e}\bar\rho\,\mN^T\mN\,\dif\bar v\,,\\[4mm] \bar\mf^e_\mathrm{con} \dis \ds\int_{\bar\Omega^e}\bar\rho\,\mN^T\bar\mB_\mrv\,\bar\mv_e\,\dif\bar v\,, \eqe and $\bar\rho = \rho/\rho_0$, $\dif\bar v = \dif v/L_0^3$, $\bar\mB_\mrv=\mB_\mrv T_0$, $\bar\mv_e=\mv_e/v_0$ and $\bar\ma_e=\ma_e\,T_0/v_0$. All the other quantities appearing in \eqref{e:globsysbar} are normalized in the same fashion. Solving \eqref{e:globsysbar} then gives the normalized unknowns $\bar\mv=\mv/v_0$ and $\bar\mpp=\mpp/p_0$, while \eqref{e:Newmark} can be solved for $\bar\mx=\mx/L_0$ and $\bar\ma=\ma\,T_0/v_0$. \subsection{Mesh motion} Apart from the unknown material velocity $\mv$ and pressure $\mpp$, the discrete mesh velocity $\mv_\mrm$ can also be regarded as an unknown.
In that case suitable (differential) equations have to be formulated for $\mv_\mrm$. A simpler approach is to determine the mesh velocity from the membrane velocity using linear interpolation: On the membrane surface the mesh motion is considered to be Lagrangian, i.e.~$\mv_\mrm=\mv$, whereas it is treated as Eulerian ($\mv_\mrm=\mathbf{0}$) beyond a certain distance from the membrane. In between, simple linear interpolation is used. Details of this are reported in the following examples. Linear interpolation, and ALE in general, does not work for some FSI problems. An example is a solid revolving within the fluid. For such cases, other techniques need to be considered. \section{Numerical examples}\label{s:ex} This section presents three numerical examples that range from very low to quite large Reynolds numbers. The first example considers a solid membrane (with no bending resistance), the second example considers a liquid membrane, and the third example considers a solid shell with low bending resistance. The examples exhibit large membrane deformations that lead to strong FSI coupling. \subsection{Fluid-inflated cylinder}\label{s:ex1} The first numerical example considers the radial inflation of a cylindrical membrane due to radial inflow. The numerical solution will be compared to the analytical solution derived in Sec.~\ref{s:ana_infl}. The initial inner radius of the cylinder $R_\mathrm{in}$, the maximum inflow velocity $v_0$ and the fluid density $\rho$ are used for normalization, such that $L_0=R_\mathrm{in}$, $T_0=R_\mathrm{in}/v_0$ and $\rho_0=\rho$. The outer radius of the membrane at initialization time $t=0$ is taken as $R_\mrs=2L_0$. Computationally, only a quarter of the cylindrical domain is modelled with a chosen height of $H=L_0$.
Sliding wall conditions\footnote{The normal velocity and the tangential traction are set to zero.} are applied to all fluid boundaries except the membrane surface, where coupling conditions apply, and the inflow boundary, where the radial inflow velocity \begin{equation} v_\mathrm{in}(t) = v_0\left\{\begin{array}{ll} \big(1-\cos(\pi t/T_0)\big)/2~ & \textnormal{for}~t < T_0 \\[1mm] 1 & \textnormal{else} \end{array}\right. \end{equation} is prescribed. The Reynolds number, $Re=\rho\,v_\mathrm{in} \,L_0/\eta$, is chosen as $Re=100$ guaranteeing a purely laminar flow. For water at room temperature ($\rho\approx 1000\,$kg/m$^3$, $\eta=1.00\,$mNs/m$^2$) this implies $v_0=10\,$m/s. The membrane is modelled as a massless, incompressible Neo-Hookean, rubber-like material according to \eqref{e:sig_sol}. The membrane's nondimensional shear stiffness is taken as $\bar\mu = 0.1$. The fluid domain is discretized by $N_\mrf=n_r\times n_\theta\times 1$ quadratic volume elements in $\be_r$, $\be_\theta$ and $\be_3$ direction (see Fig.~\ref{f:infl_cyl}), while the membrane domain is discretized by $N_\mrs=n_\theta\times 1$ quadratic surface elements along $\be_\theta$ and $\be_3$. Tab.~\ref{t:ec_g_conv} shows the considered meshes. \begin{table}[h] \centering \begin{tabular}{|r|r|r|r|r|} \hline total elements & fluid elements & membrane elements & nodes & dofs \\[0.5mm] \hline & & & & \\[-3.5mm] 7 & $6\times1\times1$ & $1\times1$ & 117 & 495 \\[1mm] 42 & $13\times3\times1$ & $3\times1$ & 567 & 2,331 \\[1mm] 100 & $24\times4\times1$ & $4\times1$ & 1,323 & 5,373 \\[1mm] \hline \end{tabular} \caption{Fluid-inflated cylinder: Considered FE meshes based on quadratic Lagrange elements.} \label{t:ec_g_conv} \end{table} The time step is chosen as $\Delta\bar t =0.0025$ for all cases. 
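The prescribed inflow ramp can be written as a short function; normalized values $v_0=T_0=1$ are used for illustration:

```python
import numpy as np

# Sketch of the inflow ramp v_in(t): a smooth cosine ramp up to the
# maximum inflow velocity v0 over one time scale T0.
def v_inflow(t, v0=1.0, T0=1.0):
    if t < T0:
        return v0 * (1.0 - np.cos(np.pi * t / T0)) / 2.0
    return v0
```

The ramp starts from rest with zero slope and reaches $v_0$ continuously at $t=T_0$, which avoids exciting spurious transients.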
The radial mesh velocity at time step $t_{n+1}$ is defined by the linear interpolation \eqb{l} v_\mrm\big(R,t_{n+1}\big) = \ds\frac{R-R_\mathrm{in}}{R_\mrs-R_\mathrm{in}}\,v_\mrs(t_n)\,, \eqe where $v_\mrs(t_n)$ is the cylinder's radial velocity at the previous time step. \par Fig.~\ref{f:ec_res1} shows the radial flow field and the membrane displacement due to the cylinder inflation at different time steps. \begin{figure}[!ht] \begin{center} \subfigure[$\bar t=0$]{ \includegraphics[scale=0.115,angle=0]{cfigs/cyl_expansion_1.jpg}} \subfigure[$\bar t=1$]{ \includegraphics[scale=0.115,angle=0]{cfigs/cyl_expansion_2.jpg}} \subfigure[$\bar t=6$]{ \includegraphics[scale=0.115,angle=0]{cfigs/cyl_expansion_3.jpg}} \subfigure[$\bar t=11$]{ \includegraphics[scale=0.115,angle=0]{cfigs/cyl_expansion_4.jpg}} \subfigure[$\bar t=21$]{ \includegraphics[scale=0.115,angle=0]{cfigs/cyl_expansion_5.jpg}} \end{center} \vspace{-6mm} \caption{Fluid-inflated cylinder: Radial flow field $\bar v=v/v_0$ and cylinder expansion at various time steps. Computationally, only a quarter of the system is modelled.} \label{f:ec_res1} \end{figure} The solid membrane is stretched by more than a factor of 3. For the membrane displacement (Fig.~\ref{f:ec_res_xr}) and velocity (Fig.~\ref{f:ec_res_vr}) the numerical result is in perfect agreement with the analytical solution derived in Sec.~\ref{s:ana_infl}; see Eqs.~(\ref{e:ana1_v}) \& (\ref{e:ana1_r}). \begin{figure}[!ht] \begin{center} \subfigure[$r(t)$ at $R=R_\mrs$]{ \includegraphics[scale=1,angle=0]{cfigs/222_r_101_11_2.pdf}} \subfigure[Convergence]{ \includegraphics[scale=1,angle=0]{cfigs/222_errors2_nt.pdf}} \end{center} \vspace{-6mm} \caption{Fluid-inflated cylinder: (a) Membrane position $\bar r=r/L_0$ vs.~time $\bar t=t/T_0$. (Analytical result: green~$\times$, FE solution: red~$+$). 
(b) Numerical error (L$^2$-norm) vs.~total number of L2 elements (radius $r$:~red~$+$, velocity $v$: green~$\times$, acceleration $a$: blue~$\star$, pressure $p$: magenta~$\square$) at $R=R_\mrs$ and $\bar{t}=21$. The dash-dotted line marks quadratic convergence behavior.} \label{f:ec_res_xr} \end{figure} \begin{figure}[!ht] \begin{center} \subfigure[$v(t)$ at $r=r_\mrs$]{ \includegraphics[scale=1,angle=0]{cfigs/222_v_101_11_2.pdf}} \subfigure[$v(r)$ at $\bar t=21$]{ \includegraphics[scale=1,angle=0]{cfigs/222_v_8400_11_2.pdf}} \end{center} \vspace{-6mm} \caption{Fluid-inflated cylinder: (a) Normalized membrane velocity vs.~time; (b) Normalized fluid velocity vs.~radial position at $t=21\,T_0$. (Analytical result: green~$\times$, FE solution: red~$+$)} \label{f:ec_res_vr} \end{figure} For the pressure shown in Fig.~\ref{f:ec_res_p} we observe deviations from the analytical result \eqref{e:ana1_p} during the transient part and again nearly perfect agreement at the final simulation time. \begin{figure}[!ht] \begin{center} \subfigure[$p(t)$ at $r=r_\mrs$]{ \includegraphics[scale=1,angle=0]{cfigs/222_p_101_11_2.pdf}} \subfigure[$p(r)$ at $\bar t=21$]{ \includegraphics[scale=1,angle=0]{cfigs/222_p_8400_11_2.pdf}} \end{center} \vspace{-6mm} \caption{Fluid-inflated cylinder: (a) Normalized membrane pressure vs.~time; (b) Normalized fluid pressure vs.~radial position at $t=21\,T_0$. (Analytical result: green~$\times$, FE solution: red~$+$)} \label{f:ec_res_p} \end{figure} The numerical results improve with increasing mesh resolution. The finite element discretization and its implementation show quadratic convergence behavior as expected, see Fig.~\ref{f:ec_res_xr}b. \subsection{Rolling droplet}\label{s:ex2} The second example simulates rolling contact of a liquid droplet on an inclined substrate considering a low Reynolds number and a contact angle of 180$^\circ$.
As we expect the motion to come close to the spinning solution of Sec.~\ref{s:ana_spin}, a purely Lagrangian FE description is chosen ($\bv_\mrm=\bv$). This also allows the use of a classical contact description between droplet and substrate. \\ There is earlier computational work on rolling droplets \citep{rasool12, Li13, Thampi13, Wind14}, but it is either two-dimensional or not based on finite elements. The present study therefore appears to be the first 3D FE simulation of rolling droplets. The treatment of contact is also novel: a computational contact algorithm with an active-set strategy is used. Within it, a no-slip (sticking) condition is assumed on the contact surface, i.e.~\eqref{e:bgc}. If slip occurs, a stick-slip algorithm is needed for the droplet \citep{dropslide}. The droplet setup considers parameters similar to those in \citet{dropslide}: An initially spherical droplet with radius $R=L_0$ and volume $V=4\pi L_0^3/3$ is considered under gravity loading, such that $\rho g L_0^3 = \gamma L_0$. For water at room temperature, with $\rho = 1000\,$kg/m$^3$, $g=9.81\,$m/s$^2$ and $\gamma=72.8\,$mN/m, this corresponds to a droplet with $L_0 = 2.72\,$mm and $V=84.6\,\mu$l. The droplet surface has no additional mass, and so $\rho_\mrs = 0$. For further normalization we choose $g_0 = g$ and $\gamma_0=\gamma$, so that $T_0=16.7\,$ms, $F_0=0.198\,$mN and $p_0=26.7\,$Pa. A high fluid viscosity is chosen, i.e.~$\eta=11.9\,$Ns/m$^2$, such that the Reynolds number becomes very small. A suitable definition for the Reynolds number of a rolling droplet is \eqb{l} Re = \ds\frac{\rho\,L_\mrc\,v_\mathrm{mean}}{\eta}\,, \label{e:Re_drop}\eqe where $L_\mrc$ is the diameter of the contact surface and $v_\mathrm{mean}$ is the mean droplet velocity. The penalty parameter for sticking according to contact model \eqref{e:fc} is taken as $\epsilon_\mrc=250\,m^2\,p_0/L_0$, where $m$ characterizes the FE resolution according to Tab.~\ref{t:rdrop}.
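The characteristic scales quoted above can be verified directly from the balance $\rho g L_0^3=\gamma L_0$ together with the choices $g_0=g$ and $\gamma_0=\gamma$:

```python
import math

# Cross-check of the droplet's characteristic scales for water at room
# temperature, as quoted in the text.
rho, g, gamma = 1000.0, 9.81, 72.8e-3          # kg/m^3, m/s^2, N/m

L0 = math.sqrt(gamma / (rho * g))              # length scale, ~2.72 mm
T0 = math.sqrt(L0 / g)                         # time scale,   ~16.7 ms
F0 = gamma * L0                                # force scale,  ~0.198 mN
p0 = F0 / L0**2                                # pressure scale, ~26.7 Pa
V = 4.0 / 3.0 * math.pi * L0**3                # droplet volume, ~84.6 ul
```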
\begin{table}[h] \centering \begin{tabular}{|r|r|r|r|r|} \hline $m$ & fluid elements & membrane elements & nodes & dofs \\[0.5mm] \hline & & & & \\[-3.5mm] 2 & 128 & 48 & 1,241 & 4,964 \\[1mm] 4 & 832 & 192 & 7,407 & 29,628 \\[1mm] 8 & 6,656 & 768 & 56,157 & 224,628 \\[1mm] 16 & 53,248 & 3,072 & 437,433 & 1,749,732 \\[1mm] \hline \end{tabular} \caption{Rolling droplet: Considered FE meshes based on quadratic Lagrange elements.} \label{t:rdrop} \end{table} Quadratic Lagrange elements are used. The computational runtime per time step (accounting for residual and tangent matrix assembly, contact computation and Newton-Raphson iteration) is about 1 min.~for $m=4$, 20 mins.~for $m=8$ and 100 mins.~for $m=16$. \\ Initially the droplet is at rest. Rolling motion is then induced by inclining the substrate considering the time-varying inclination angle \eqb{l} \beta(t) = \ds\frac{\beta_0}{2}\left\{\begin{array}{ll} \ds1-\cos\frac{\pi t}{t_1} & $for $0\leq t<t_1, \\[2mm] 2 & $for $t_1\leq t\leq t_2, \\[0mm] \ds1+\cos\frac{\pi (t-t_2)}{t_1} & $for $t_2\leq t\leq t_1+t_2, \\[2mm] 0 & $for $t_1+t_2<t\leq t_3, \end{array}\right. \eqe with $t_1=50\,T_0$, $t_2=200\,T_0$, $t_3=350\,T_0$ and the two cases: \\ 1. $\beta_0=10^\circ$ with $\Delta t=8\,T_0/m$, and \\ 2. $\beta_0=20^\circ$ with $\Delta t=4\,T_0/m$. 
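The inclination schedule above translates directly into a small helper function (a sketch for illustration, not part of the FE code; times are in units of $T_0$ and angles in degrees):

```python
import math

def beta(t, beta0, t1=50.0, t2=200.0, t3=350.0):
    """Substrate inclination angle at time t (t in units of T0)."""
    if 0.0 <= t < t1:                 # smooth ramp-up
        return 0.5 * beta0 * (1.0 - math.cos(math.pi * t / t1))
    if t1 <= t <= t2:                 # constant inclination beta0
        return beta0
    if t2 < t <= t1 + t2:             # smooth ramp-down
        return 0.5 * beta0 * (1.0 + math.cos(math.pi * (t - t2) / t1))
    if t1 + t2 < t <= t3:             # level substrate
        return 0.0
    raise ValueError("t outside [0, t3]")
```

The schedule is continuous: the ramp-up reaches $\beta_0$ exactly at $t=t_1$ and the ramp-down returns to zero at $t=t_1+t_2$.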
\\ Fig.~\ref{f:rdrop_vt} shows the finite element results for the mean droplet velocity $v_\mathrm{mean}$ for the two cases.\footnote{The mean droplet velocity $v_\mathrm{mean}$ is determined by computing the volume average of the fluid velocity $\bv$ and then taking its component parallel to the substrate surface.} \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,5.7) \put(-8.05,-.1){\includegraphics[height=58mm]{cfigs/beta20l_v-t.pdf}} \put(0.15,-.1){\includegraphics[height=58mm]{cfigs/beta20l_v-t_z.pdf}} \end{picture} \caption{Rolling droplet: Mean droplet velocity vs.~time for $\beta_0=10^\circ$ and $\beta_0=20^\circ$ using the meshes from Tab.~\ref{t:rdrop}. The right hand side shows an enlargement for $\beta_0=20^\circ$. As seen, the FE results converge upon mesh refinement.} \label{f:rdrop_vt} \end{center} \end{figure} As can be seen, the FE results converge upon mesh refinement. The figure also shows that steady rolling motion is attained at about $t=150\,T_0$ for $\beta_0=20^\circ$, while it is attained almost instantaneously for $\beta_0=10^\circ$ (i.e.~at $t=t_1$). The instantaneous response of $v_\mathrm{mean}$ to $\beta$, for low $\beta_0$, can also be seen from the $v_\mathrm{mean}(\beta)$--plot in Fig.~\ref{f:rdrop_vb}. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,5.7) \put(-8.05,-.1){\includegraphics[height=58mm]{cfigs/beta10s_v-b.pdf}} \put(0.15,-.1){\includegraphics[height=58mm]{cfigs/beta20l_v-b.pdf}} \put(-7.85,-.1){a.} \put(.35,-.1){b.} \end{picture} \caption{Rolling droplet: Mean droplet velocity vs.~$\beta$ for $\beta_0=10^\circ$ (a) and $\beta_0=20^\circ$ (b) using $m=16$. The return branch (for decreasing $\beta$) is marked by a dashed line.} \label{f:rdrop_vb} \end{center} \end{figure} For $\beta_0=10^\circ$, both branches (for increasing $\beta$ and decreasing $\beta$, respectively) are almost identical. For $\beta_0=20^\circ$, on the other hand, the two branches differ.
\\ For further illustration, Fig.~\ref{f:rdrop_v} shows the droplet deformation and velocity field $\norm{\bv}$ during rolling. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,4) \put(-8.1,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_000v.jpg}} \put(-4.95,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_100v.jpg}} \put(-1.8,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_200v.jpg}} \put(1.35,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_400v.jpg}} \put(4.65,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_700v.jpg}} \put(-7.98,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba000v.jpg}} \put(-4.83,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba100v.jpg}} \put(-1.68,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba200v.jpg}} \put(1.47,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba400v.jpg}} \put(4.62,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba700v.jpg}} \end{picture} \caption{Rolling droplet: Velocity magnitude $\norm{\bv}/v_0$ at $t=0$, $t=50\,T_0$, $t=100\,T_0$, $t=200\,T_0$ and $t=350\,T_0$ (left to right) for $\beta_0=20^\circ$ and $m=8$. Only half of the symmetric droplet is shown. In the top panel the symmetry surface is removed and instead a selected material plane is tracked during deformation. A single fluid particle is marked by `$\circ$'.} \label{f:rdrop_v} \end{center} \end{figure} The deformation is considerable and should not be neglected, as has been done in earlier work \citep{rasool12,rasool13}. The figure also shows how the contact surface changes. Initially the contact surface is circular with a diameter of $L_\mrc=1.36\,L_0$. During steady rolling the diameter in the rolling direction reduces to $L_\mrc=1.04\,L_0$. With $v_\mathrm{mean}=0.0268\,L_0/T_0$, the Reynolds number becomes $Re=1.04\cdot10^{-3}$ according to \eqref{e:Re_drop}.
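The quoted Reynolds number follows directly from \eqref{e:Re_drop} with the physical parameters of this example; a quick sketch (values as given in the text):

```python
rho, eta = 1000.0, 11.9        # fluid density [kg/m^3] and viscosity [Ns/m^2]
L0, T0 = 2.72e-3, 16.7e-3      # length and time scales [m], [s]

L_c = 1.04 * L0                # contact diameter during steady rolling
v_mean = 0.0268 * L0 / T0      # mean droplet velocity

Re = rho * L_c * v_mean / eta  # -> ~1.04e-3
print(Re)
```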
Fig.~\ref{f:rdrop_v} clearly shows that the advancing and receding droplet halves are not symmetric during rolling.\\ This can also be seen from the pressure distribution shown in Fig.~\ref{f:rdrop_p}. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,4) \put(-8.1,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_000p.jpg}} \put(-4.95,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_100p.jpg}} \put(-1.8,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_200p.jpg}} \put(1.35,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_400p.jpg}} \put(4.65,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_700p.jpg}} \put(-7.98,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba000p.jpg}} \put(-4.83,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba100p.jpg}} \put(-1.68,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba200p.jpg}} \put(1.47,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba400p.jpg}} \put(4.62,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba700p.jpg}} \end{picture} \caption{Rolling droplet: pressure field $p/p_0$ at $t=0$, $t=50\,T_0$, $t=100\,T_0$, $t=200\,T_0$ and $t=350\,T_0$ (left to right) for $\beta_0=20^\circ$ and $m=8$} \label{f:rdrop_p} \end{center} \end{figure} The fluid pressure is largest at the advancing front of the contact surface. Since the contact surface is flat, the fluid pressure is equal to the contact pressure. Close inspection shows that the pressure is oscillatory in the vicinity of the contact line $\sC$. Unlike the velocity field, those oscillations do not converge with mesh refinement. It thus seems that the pressure stabilization scheme, described in Sec.~\ref{s:wfF}, is not sufficient to handle the contact boundary of a rolling droplet, even though the static droplet (at $t=0$ and $t=350\,T_0$) poses no problem.
The problem may be related to the discontinuity of the contact pressure: it jumps to zero at the contact boundary. The way the fluid velocity, fluid pressure and contact pressure are interpolated (quadratic Lagrange interpolation is used here) seems incompatible with this jump. This problem does not appear to have been addressed in the literature yet and requires further study. Perhaps $C^1$-continuous interpolation, such as that provided by NURBS, would help. We note that for $\beta=10^\circ$, pressure oscillations also appear, but they are less pronounced. \\ To remove the pressure oscillations, Gaussian smoothing can be used for post-processing. Selecting the standard deviation of the Gaussian distribution as $\sig=1/m$, i.e.~on the order of the nodal distance, gives non-oscillatory pressures; see Fig.~\ref{f:rdrop_ps}. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,4) \put(-8.1,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_000ps.jpg}} \put(-4.95,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_100ps.jpg}} \put(-1.8,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_200ps.jpg}} \put(1.35,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_400ps.jpg}} \put(4.65,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_700ps.jpg}} \put(-7.98,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba000ps.jpg}} \put(-4.83,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba100ps.jpg}} \put(-1.68,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba200ps.jpg}} \put(1.47,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba400ps.jpg}} \put(4.62,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba700ps.jpg}} \end{picture} \caption{Rolling droplet: smoothed pressure field at $t=0$, $t=50\,T_0$, $t=100\,T_0$, $t=200\,T_0$ and $t=350\,T_0$ (left to right) for $\beta_0=20^\circ$ and $m=8$.
See also supplementary movie file \texttt{drop\underline{ }roll\underline{ }p.mpg}.} \label{f:rdrop_ps} \end{center} \end{figure} The smoothed pressure converges with mesh refinement. The pressure distribution shows that the advancing contact surface carries most of the droplet weight (component $\cos\beta \times \rho gV$). Component $\sin\beta \times \rho gV$ is equilibrated by a tangential sticking force. The moment caused by these external forces is equilibrated by the internal moment of the fluid stress. \\ The last plot shows the vorticity (i.e.~spin) component $2\omega_2 := \be_2\cdot(\nabla\times\bv)$ (along the axis of rotation $\be_2$) and the dissipation $\sD=\bsig:\bD$ during rolling; see Fig.~\ref{f:rdrop_WD}. \begin{figure}[!ht] \begin{center} \unitlength1cm \begin{picture}(0,4) \put(-7.2,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_100Ws.jpg}} \put(-3.85,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_400Ws_c.jpg}} \put(-7.08,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba100Ws.jpg}} \put(-3.93,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba400Ws.jpg}} \put(0.95,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te200_dt0p5_100Disss.jpg}} \put(4.25,1.5){\includegraphics[height=25mm]{cfigs/roll20_m8_te350_dtp5_400Disss_c.jpg}} \put(1.07,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te200_dt0p5_ba100Disss.jpg}} \put(4.22,-.2){\includegraphics[height=18.6mm]{cfigs/roll20_m8_te350_dtp5_ba400Disss.jpg}} \put(-7.6,-.1){a.} \put(0.55,-.1){b.} \end{picture} \caption{Rolling droplet: a.~smoothed vorticity component $2\omega_2$ at $t=50\,T_0$ and $t=200\,T_0$; b.~smoothed dissipation $\sD=\bsig:\bD$ at $t=50\,T_0$ and $t=200\,T_0$; both for $\beta_0=20^\circ$ and $m=8$. The units of $2\omega_2$ are $1/T_0$; the units of $\sD$ are $p_0/T_0$.} \label{f:rdrop_WD} \end{center} \end{figure} Also here smoothing is used. 
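The vorticity measure $2\omega_2 = \be_2\cdot(\nabla\times\bv)$ can be illustrated independently of the FE code: for a rigid rotation with angular velocity $\omega$ about $\be_2$, the curl of the velocity field is the constant $2\omega$, which is the spinning-sphere value of Sec.~\ref{s:ana_spin}. A minimal numpy sketch:

```python
import numpy as np

omega = 0.3                                # angular velocity about e2 (here: y)
s = np.linspace(-1.0, 1.0, 21)
X, Z = np.meshgrid(s, s, indexing="ij")

# Rigid rotation about e2: v = omega e2 x r, i.e. vx = omega*z, vz = -omega*x.
Vx = omega * Z
Vz = -omega * X

# 2*omega_2 = e2 . (curl v) = dvx/dz - dvz/dx (only in-plane terms contribute).
two_omega2 = np.gradient(Vx, s, axis=1) - np.gradient(Vz, s, axis=0)
```

Since the velocity field is linear, the finite-difference curl is exact and equals $2\omega$ everywhere, in agreement with the spinning-sphere result referenced in the text.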
According to Sec.~\ref{s:ana_spin} the vorticity of a spinning sphere is a constant vector with magnitude 2$\omega$. In contrast, the vorticity of a rolling droplet is non-constant: A maximum is attained at the contact boundary and a minimum occurs on the contact surface. Away from the contact surface, however, the vorticity approaches a constant. The behavior is similar for the dissipation: Away from the contact surface, the dissipation is zero and thus agrees with the spinning sphere solution. Non-zero dissipation, associated with shear flow, occurs in the vicinity of the contact surface, with a maximum occurring at the advancing contact front. For longer rolling times, or for higher $\beta$, the shear flow becomes more pronounced, such that an ALE formulation is needed for the mesh. On the free surface (which is tracked explicitly within the present scheme) such a formulation needs to be Lagrangian in the normal direction but Eulerian in-plane. The formulation of such an ALE scheme is outside the present scope. \subsection{Flapping flag}\label{s:flag} The third example simulates the flapping motion of a flag. The problem setup of this example is shown in Fig.~\ref{f:flag_ex}. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,5.4) \put(-4.7,-.2){\includegraphics[height=55mm]{cfigs/flag_exc.jpg}} \end{picture} \caption{Flapping flag: Side, top and front view of the problem setup. The flag is fixed on the left and its lateral displacement and velocity are monitored at point $A$.} \label{f:flag_ex} \end{center} \end{figure} The flag is modeled as a flexible sheet that is supported on the left-hand side. It is excited by a uniform inflow with velocity $v_\mathrm{in}$. The length scale $L_0$, the fluid density $\rho_0$ and the time scale $T_0$ are used to normalize the problem. The remaining parameters are chosen according to Tab.~\ref{t:flag_para}.
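The physical flag parameters quoted in this subsection follow from the normalized values in Tab.~\ref{t:flag_para}. As a cross-check, a short sketch; the derived scales $\eta_0=\rho_0 L_0^2/T_0$ and $\mu_0=F_0/L_0$ are assumptions here, chosen to be consistent with the quoted numbers, while $c_0=F_0\,L_0$ with $F_0=\rho_0 L_0^4/T_0^2$ is stated in the text.

```python
rho0, L0, T0 = 1.2, 0.1, 1.0            # reference scales: [kg/m^3], [m], [s]
F0 = rho0 * L0**4 / T0**2               # force scale

eta   = 1.531e-3 * rho0 * L0**2 / T0    # fluid viscosity   -> ~18.37e-6 Ns/m^2
rho_s = 1.0 * rho0 * L0                 # flag area density -> 0.12 kg/m^2
mu    = 4.167e3 * F0 / L0               # shear stiffness   -> ~5 N/m
c     = 0.02 * F0 * L0                  # bending stiffness -> ~0.24e-6 Nm

Re = rho0 * (3 * L0) * (L0 / T0) / eta  # chord-based Reynolds number -> ~1960
```

This reproduces the physical values and the Reynolds number quoted in the text.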
\begin{table}[h] \centering \begin{tabular}{|l|l|} \hline parameter & normalized value \\[0.5mm] \hline & \\[-3.5mm] inflow velocity & $\bar v_\mathrm{in}=1$ \\[0.5mm] density of the fluid & $\bar\rho = 1$ \\[0.5mm] viscosity of the fluid & $\bar\eta = 1.531 \cdot 10^{-3}$ \\[0.5mm] density of the flag & $\bar\rho_\mrs=1$ \\[0.5mm] shear stiffness of the flag & $\bar\mu = 4.167 \cdot 10^3$ \\[0.5mm] bending stiffness of the flag & $\bar c=0.02$ \\[1mm] \hline \end{tabular} \caption{Flapping flag: Considered inflow and material parameters.} \label{t:flag_para} \end{table} Considering $L_0=0.1$m, $T_0=1$s and $\rho_0=1.2\,$kg/m$^3$, the fluid parameters become $\rho=\rho_0$ and $\eta=18.37\,\mu$Ns/m$^2$, which correspond to the values of air at sea level and $20^\circ$C, while the flag parameters become $\rho_\mrs=0.12\,$kg/m$^2$, $\mu = 5\,$N/m and $c=0.24\,\mu$Nm according to Sec.~\ref{s:norm}.\footnote{Following Sec.~\ref{s:norm}, the bending stiffness needs to be normalized by $c_0=F_0\,L_0$, where $F_0=\rho_0\,L_0^4/T_0^2$.} The Reynolds number of the problem is \eqb{l} Re = \ds\frac{\rho\,L_\mrc\,v_\mathrm{in}}{\eta}\,, \eqe where $L_\mrc$ is the chord length of the flag. For $L_\mrc=3L_0$ and the considered $\rho$ and $\eta$, it follows that $Re=1960\,\bar v_\mathrm{in}$. At this $Re$ and density ratio\footnote{The density ratio $R_1:=\rho_\mrs/(\rho L_\mrc)$, as defined in \citet{shelley11}, is 1/3 here.}, the flag motion can be expected to be chaotic according to the phase diagram of \citet{connell07}. \\ The flapping flag example is a good test case since the flag motion and the surrounding flow field can become very complex, as the experimental data reported in \citet{shelley11} show. There have been recent 3D simulations that study the problem in detail \citep{hoffman11,banerjee15,gilmanov15,tullio16}. In some of those works immersed boundary methods are used instead of ALE.
Such methods are advantageous for very large flag motions that may even involve self-contact. In contrast to earlier work, the flag is discretized here with $C^1$-continuous isogeometric shell elements. Their formulation is the same as the one of Eq.~\eqref{e:f_icfe} with the only exception that $\mf^e_{\sS\mathrm{int}}$ is extended by the internal bending moments according to the formulation of \citet{solidshell} using the Canham bending model. A shell formulation is used in order to regularize the system with bending stiffness. A low stiffness value is used such that the structure remains very flexible. Below a certain threshold value of $c$, the flapping behavior becomes independent of $c$ as is shown later. \\ The fluid domain is discretized with $n_{\sF\mathrm{el}}=8m\times2m\times4m$ quadratic 3D NURBS elements, while the flag is discretized with $n_{\sS\mathrm{el}}=3m\times2m$ quadratic 2D NURBS elements. The number of nodes and dofs resulting from this discretization\footnote{The number of nodes is $n_\mathrm{no}=(8m+4)(2m+3)(4m+4)$; the number of dofs is $n_\mathrm{dof}=4n_\mathrm{no}+n_{\sS\mathrm{el}}$, due to the double pressure nodes on the flag surface.} are listed in Tab.~\ref{t:flag_mesh}. \begin{table}[h] \centering \begin{tabular}{|r|r|r|r|r|} \hline $m$ & fluid elements & membrane elements & nodes & dofs \\[0.5mm] \hline & & & & \\[-3.5mm] 2 & 512 & 24 & 1,680 & 6,744 \\[1mm] 4 & 4096 & 96 & 7,920 & 31,776 \\[1mm] 8 & 32,768 & 384 & 46,512 & 186,432 \\[1mm] \hline \end{tabular} \caption{Flapping flag: Considered FE meshes based on quadratic NURBS elements.} \label{t:flag_mesh} \end{table} On the surface of the flag, double pressure dofs are used to account for pressure jumps as described in Sec.~\ref{s:2xp}. The time step is taken as $\Delta t=0.16\,T_0/m$. The computational runtime per time step is about 3 mins.~for $m=4$ and 25 mins.~for $m=8$. \\ Fig.~\ref{f:flag_x} shows the flag deformation at selected time steps. 
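The mesh statistics of Tab.~\ref{t:flag_mesh} follow from the element counts and the footnote formulas; a small sketch for illustration:

```python
def flag_mesh(m):
    """Element, node and dof counts for the flag discretization level m."""
    n_fluid = (8 * m) * (2 * m) * (4 * m)     # fluid NURBS elements
    n_shell = (3 * m) * (2 * m)               # flag NURBS elements
    n_no = (8 * m + 4) * (2 * m + 3) * (4 * m + 4)
    # 4 dofs per node, plus n_shell extra dofs from the double pressure
    # nodes on the flag surface (cf. the footnote in the text).
    n_dof = 4 * n_no + n_shell
    return n_fluid, n_shell, n_no, n_dof

for m in (2, 4, 8):
    print(m, flag_mesh(m))   # reproduces the rows of Tab. t:flag_mesh
```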
\begin{figure}[!ht] \begin{center} \unitlength1cm \begin{picture}(0,2.3) \put(-7.9,-.2){\includegraphics[height=23mm]{cfigs/flag_2218v.jpg}} \put(-4.75,-.2){\includegraphics[height=23mm]{cfigs/flag_2246v.jpg}} \put(-1.6,-.2){\includegraphics[height=23mm]{cfigs/flag_2274v.jpg}} \put(1.55,-.2){\includegraphics[height=23mm]{cfigs/flag_2302v.jpg}} \put(4.7,-.2){\includegraphics[height=23mm]{cfigs/flag_2330v.jpg}} \end{picture} \caption{Flapping flag: Deformation at $t=44.36\,$s, $t=44.92\,$s, $t=45.48\,$s, $t=46.04\,$s and $t=46.60\,$s (left to right) for $m=8$; see also supplementary movie file \texttt{flag\underline{ }v.mpg}. The coloring shows the lateral velocity component in the range $\{-1,\,1\}v_0$ (from blue to red). The streamlines of the flow are also shown.} \label{f:flag_x} \end{center} \end{figure} These are snapshots from the supplementary movie file \texttt{flag\underline{ }v.mpg}. As expected, the structure performs the oscillations typical of a flag along its length. Close inspection shows that the flag motion also varies in the vertical direction. The pressure field around the flag is shown in Fig.~\ref{f:flag_p}. \begin{figure}[!ht] \begin{center} \unitlength1cm \begin{picture}(0,3) \put(-8.05,1.3){\includegraphics[height=16.3mm]{cfigs/flag_t2218p.jpg}} \put(-2.7,1.3){\includegraphics[height=16.3mm]{cfigs/flag_t2246p.jpg}} \put(2.65,1.3){\includegraphics[height=16.3mm]{cfigs/flag_t2274p.jpg}} \put(-8.05,-.3){\includegraphics[height=16.3mm]{cfigs/flag_t2022p.jpg}} \put(-2.7,-.3){\includegraphics[height=16.3mm]{cfigs/flag_t2050p.jpg}} \put(2.65,-.3){\includegraphics[height=16.3mm]{cfigs/flag_t2078p.jpg}} \end{picture} \caption{Flapping flag: Fluid pressure in the mid-plane at $t=44.36\,$s, $t=44.92\,$s, $t=45.48\,$s, $t=46.04\,$s, $t=46.60\,$s and $t=47.16\,$s (top left to bottom right) for $m=8$. The coloring is in the range $\{-.7,\,1.2\}p_0$ (from blue to red).} \label{f:flag_p} \end{center} \end{figure} The figure also shows the mesh motion around the flag.
It is based on the interpolation scheme given in App.~\ref{s:flagALE}. \\ For the chosen parameters, the flapping behavior is still (quite) periodic, as Fig.~\ref{f:flag_xv} shows. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,5.7) \put(-7.95,-.1){\includegraphics[height=58mm]{cfigs/flag_x-t.pdf}} \put(0.15,-.1){\includegraphics[height=58mm]{cfigs/flag_v-t.pdf}} \put(-7.85,-.1){a.} \put(.35,-.1){b.} \end{picture} \caption{Flapping flag: Lateral displacement (a) and velocity (b) at point $A$ for various FE discretizations. Symbol `$\circ$' marks the configurations shown in Fig.~\ref{f:flag_x}.} \label{f:flag_xv} \end{center} \end{figure} The period of the main oscillation is 5.60\,s. Apart from the main oscillations, there are also fine scale oscillations, as Fig.~\ref{f:flag_xv}b shows. Fig.~\ref{f:flag_xv} also shows that the simulation results converge with mesh refinement. For the first 20 seconds, mesh $m=4$ already gives quite good results. \\ The model parameters of Tab.~\ref{t:flag_para} affect the flapping behavior of the flag. The influence of $Re$ has been discussed in detail in earlier work, e.g.~see \citet{shelley11}, so the following discussion focuses on the membrane parameters. Three aspects are noteworthy:\\ 1.~For sufficiently low $c$, the flapping behavior (for given $Re$) remains unchanged, i.e. it becomes independent of $c$. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,5.7) \put(-7.95,-.1){\includegraphics[height=58mm]{cfigs/flag_c.pdf}} \put(0.15,-.1){\includegraphics[height=58mm]{cfigs/flag_muv.pdf}} \put(-7.85,-.1){a.} \put(.35,-.1){b.} \end{picture} \caption{Flapping flag: Influence of membrane parameters $\bar c$ (a) and $\bar \mu$ (b). The influence of $c$ vanishes below a threshold value of $c$. Increasing $\mu$ leads to smaller velocities but increased fine scale oscillations.} \label{f:flag_cm} \end{center} \end{figure} According to Fig.~\ref{f:flag_cm}a this occurs below $\bar c\approx10^{-3}$. 
Below that $c$, the flag is effectively a membrane without bending stiffness, and $c$ serves only to regularize the numerical solution.\\ 2.~Increasing $\mu$ leads to increased fine scale oscillations, as Fig.~\ref{f:flag_cm}b shows. Since $\mu$ controls the in-plane stiffness of the flag, those oscillations can be associated with longitudinal vibrations of the flag.\\ 3. Increasing the ratio between fluid and membrane density does not degrade the computational robustness of the proposed monolithic scheme: Fig.~\ref{f:flag_rho} shows the flapping behavior for various density ratios. \begin{figure}[h] \begin{center} \unitlength1cm \begin{picture}(0,5.7) \put(-7.95,-.1){\includegraphics[height=58mm]{cfigs/flag_rho.pdf}} \put(0.15,-.1){\includegraphics[height=58mm]{cfigs/flag_rho_v-t.pdf}} \put(-7.85,-.1){a.} \put(.35,-.1){b.} \end{picture} \caption{Flapping flag: Influence of membrane density $\bar\rho_\mrs$ on the flag displacement (a) and velocity (b). The density ratio affects the frequency and amplitude of vibration as expected. For $\bar\rho_\mrs=3$ and above, the simulation terminates after the flag penetrates the boundary at $\pm L_0$.} \label{f:flag_rho} \end{center} \end{figure} For $\bar\rho_\mrs = \bar\rho$ ($=1$ here), the nodal FE forces due to fluid and membrane inertia are equal in the limit $h_e\rightarrow0$ (since $\dot\bv\approx$ const.~across the element thickness). For all the considered density ratios, the Newton-Raphson iteration at each time step converges to a normalized energy residual of $10^{-27.7}$ within an average of six iterations. The density ratio therefore does not have a negative effect on the computational stability or the conditioning of the system. This is in contrast to partitioned FSI schemes, which have been shown to suffer from a loss of robustness as the inertia forces of the flow become comparable to or larger than those of the structure \citep{letallec01,causin05}.
The reason lies in the strong effect of the fluid on the structure at high fluid densities, which is not well captured by weakly coupled partitioned schemes and requires many staggering steps in strongly coupled partitioned schemes. The extreme case of this effect occurs when $\bar\rho_\mrs=0$, which was considered in the droplet example of Sec.~\ref{s:ex2}. Also in this case, no stability issues were encountered in any of the simulations. \section{Conclusion}\label{s:concl} A unified FSI formulation is presented that is suitable for solid, liquid and mixed membranes. At free liquid surfaces, sticking contact can be accounted for. The fluid flow and the structure are discretized with finite elements using a stabilized fluid formulation and a surface-based membrane formulation. A conforming interface discretization is used between fluid and membrane, which leads to a simple monolithic coupling formulation. On membrane surfaces surrounded by fluid on both sides, double pressure nodes are required. The temporal discretization is based on the generalized-$\alpha$ scheme. Two analytical and three numerical examples are presented in order to illustrate and verify the proposed formulation. They consider fluid flow at low and high Reynolds numbers exhibiting strong FSI coupling. \\ The proposed formulation is very general and thus suitable as a basis for further research. In order to increase efficiency, the formulation can be extended to boundary elements (for low $Re$) or turbulence models (for high $Re$). Under current study is the use of enriched finite element discretizations \citep{ZEN} that are suitable for efficiently capturing boundary layers \citep{ESEflow}. Another extension of the present formulation is to re-examine the pressure stabilization scheme at contact boundaries. This would be especially important in the presence of sharp contact angles. Such a formulation would then allow for a detailed flow analysis of droplets on rough surfaces.
\bigskip {\Large\bf Acknowledgements} The authors are grateful to the German Research Foundation (DFG) for supporting this research under grants GSC 111 and SA1822/3-2. The authors also wish to thank Maximilian Harmel and Raheel Rasool for proofreading the manuscript.
# A bathtub contains 45 gallons of water and the total weight of the tub and water is approximately... ## Question: A bathtub contains 45 gallons of water and the total weight of the tub and water is approximately 760.725 pounds. You pull the plug and the water begins to drain. Let v represent the number of gallons of waters that has drained from the tub since the plus was pulled. Noe that water weighs 8.345 pounds per gallon. a. Write an expression in terms of v that represents the weight of the water that has drained from the tub (in pounds). b. Write an expression in terms of v that represents the total weight of the tub and water (in pounds). c. How much does the tub weight when there is no water in it? d. If the weight of the tub and water is 618.41 pounds, how many gallons of water are in the tub? ## Algebraic Expressions In this question, we form algebraic expressions at each step to solve the question. When an expression is equated to a value, we have an equation as well. a) The weight of the water that must have drained out will be the product of the quantity of water that has drained out (v gallons) and the weight of water per gallon (8.345 pounds). This is: $$8.345v$$ b) The total weight when v gallons have drained out will be the difference between the initial weight and the weight that has reduced. So, $$760.725-8.345v$$ c) When there is not water, the amount of water that must have been drained is v=45 gallons. So, the weight of the tub is: $$760.725-8.345*45=385.2\text{ pounds}$$ d) First, let's find the amount of water that must have drained out by solving the following equation. \begin{align} 760.725-8.345v&=618.41\\ -8.345v&=618.41-760.725\\ v&=\frac{142.315}{8.345}\\ &=17.05 \end{align} If 17.05 gallons has drained out, the amount remaining is {eq}45-17.05=27.95\text{ gallons} {/eq}.
# Magnetic Field from Two Wires 1. Mar 8, 2008 ### cse63146 1. The problem statement, all variables and given/known data In this problem, you will be asked to calculate the magnetic field due to a set of two wires with antiparallel currents as shown in the diagram . Each of the wires carries a current of magnitude . The current in wire 1 is directed out of the page and that in wire 2 is directed into the page. The distance between the wires is 2d. The x axis is perpendicular to the line connecting the wires and is equidistant from the wires. Which of the vectors best represents the direction of the magnetic field created at point K (see the diagram in the problem introduction) by wire 1 alone? Enter the number of the vector with the appropriate direction. 2. Relevant equations Right Hand Rule for a Straight Wire 3. The attempt at a solution Using the right hand rule for a straight wire, I found out that the direction of the magnetic field is counter - clockwise. But I'm not sure which vector represents that. Any suggestions? 2. Mar 9, 2008 ### Shooting Star The magnetic field is a vector quantity and has a single direction at a given point. If you draw a magnetic line of force through K, due to the field of wire 1 alone, in which direction does the tangent to the line of force at K point? 3. Mar 9, 2008 ### physixguru The magnetic field surrounding a current-carrying wire is tangent to a circle centered on that wire. Use the right-hand rule to find which way it points. For the current coming out of the page, point your thumb in the direction of current (out) and your fingers curl in the direction of the field (counter clockwise). 4. Mar 9, 2008 ### cse63146 Vector #8 best describes the direction of the magnetic field. There's another question that asks: Which of these vectors best represents the direction of the net magnetic field created at point K by both wires? 
Since $$I_2 = I_1$$, the magnetic field due to wire #2 be clock wise and the current due to wire #1 be counter - clockwise, would the vector be #1? 5. Mar 9, 2008 ### Shooting Star Let's settle this one first. Draw a circle with the centre at wire 1 and passing through K. Now draw the tangent to the circle at K. Do you still think it's vector #8? 6. Mar 9, 2008 ### cse63146 If I drew that diagram correctly, the answer would be vector #7. 7. Mar 9, 2008 ### Shooting Star Think about the right hand rule. The current is coming toward you. It should be #3. Try the other questions now. 8. Mar 9, 2008 ### cse63146 Thanks, got the other few questions, but I'm having trouble with this one: Point L is located a distance d$$\sqrt{2}$$ from the midpoint between the two wires. Find the magnitude of the magnetic field created at point L by wire 1. So I need to use the Biot - Savart Law: $$\frac{\mu_0 I}{2 \pi d}$$ to find the distance, I need to use the pythogorean thereom and I get distance to be $$\sqrt{3 d^2} = \sqrt{3} d$$ and I would just plug that it for the d in the Biot Savart Law to obtain the equation for the magnetic field of point L by wire 1? 9. Mar 9, 2008 ### Shooting Star That's it. 10. Apr 10, 2011 ### sdodson I need to find the magnitude of the net magnetic field at point L in the figure due to both wires and I am having some trouble. I understand that the y components of the magnetic fields from the wires will cancel and just the x components will be left. Therefore, I found the magnitudes of the B_1 field and the B_2 field (magnetic fields due to wires 1 and 2) and used trig to find only the x-components and added these. However, my answer is off by a some factor and I cannot figure out why. 
Using t as the angle K-1-L and K-2-L, I got: B_1x = B_1(sin(t)) = (($$\mu$$ * I)/(2 $$\pi$$ d $$\sqrt{}3$$) *($$\sqrt{}2$$ d)/$$\sqrt{}3$$d) B_2x = B_1x I multiplied B_1x by 2 to get my final answer of: ($$\sqrt{}2$$$$\mu$$I)/(3$$\pi$$d) * note: mu in equations is supposed to be mu sub not and it strangely appears as if 2 and 3 are raised to the pi, they are multiplied I believe I must have simply made a math error. Any insights would be greatly appreciated! Thank You!
# Reference for “multi-monoidal categories” I have attempted to find a definition of a monoidal category which incorporates $n$-fold tensor products instead of just binary tensor products. Definition. A "multi-monoidal category" consists of • a category $\mathcal{C}$, • for every $n \geq 0$ a functor $T_n : \mathcal{C}^n \to \mathcal{C}$, denoted by $(A_1,\dotsc,A_n) \mapsto A_1 \otimes \dotsc \otimes A_n$, • an isomorphism $\eta : T_1 \cong \mathrm{id}_{\mathcal{C}}$, • for all $n_1,\dotsc,n_k \geq 0$ an isomorphism $$\mu_{n_1,\dotsc,n_k} : T_k \circ (T_{n_1} \times \dotsc \times T_{n_k}) \cong T_{n_1+\dotsc+n_k}.$$ The following coherence conditions should hold: • Coherence of $\eta$ with $\mu$: We have $\mu_{n_1} = \eta \circ T_{n_1} : T_1 \circ T_{n_1} \to T_{n_1}$. • Coherence of $\mu$ with $\mu$: The square $$\begin{array}{cc} T_k \circ \bigl(T_{n_1} \circ (T_{m_{11}} \times \dotsc \times T_{m_{1n_1}}) \times \dotsc \times T_{n_k} \circ (T_{m_{k1}} \times \dotsc \times T_{m_{k n_k}})\bigr) & \rightarrow & T_k \circ (T_{m_{11}+\dotsc+m_{1 n_1}} \times \dotsc \times T_{m_{k1}+\dotsc+m_{k n_k}}) \\ \downarrow && \downarrow \\ T_{n_1+\dotsc+n_k} \circ (T_{m_{11}} \times \dotsc \times T_{m_{k n_k}}) & \rightarrow & T_{m_{11}+\dotsc+m_{k n_k}} \end{array}$$ commutes. Notice that when $\mathcal{C}$ is discrete, this is the monadic definition of a monoid (as compared to the usual definition). Questions. (1) Did I forget some coherence condition? (2) Is this concept already known? Does it have a name? It really looks like the most natural thing in the world, especially when you think "operadic" or "monadic". (3) Most important for me: Is this concept equivalent to the definition of a monoidal category? If yes, what is a reference for this? 
The idea for the equivalence is straightforward (the $n$-fold tensor product is an iteration of binary tensor products, etc.), but I believe that it will probably require much work to check the coherence conditions in both directions.

• Almost surely what you've written down is an action of the operad Assoc on a category. The equivalence with the standard definition is probably "implicit in results of Mac Lane", or some such muttering. I'm reminded of a definition of "symmetric monoidal category" that I think I saw in a paper of Deligne's (although I don't remember where exactly) that asks for a functor $\mathcal C^S \to \mathcal C$ for each finite set $S$, with natural maps that include the permutations of $S$. So if you think the coherences are correct (I didn't check carefully), then go ahead and use it! – Theo Johnson-Freyd Jan 8 '15 at 5:27
• I believe that what you're after is Max Kelly's notion of a club; there is a club whose pseudo-algebras are precisely monoidal categories, and if you unravel the definition of being a pseudo-algebra for this club you should get what you wrote above. – Dan Petersen Jan 8 '15 at 7:25
• Also relevant: mathoverflow.net/questions/8252/… – André Henriques Jan 8 '15 at 9:41
• You should look at the beginning of Lurie's Higher Algebra, section 2 (and then the rest of it). – Adam Gal Jan 8 '15 at 10:28
• Have you considered an approach similar to Durov? "Algebraic (Strong-/Pseudo-) Monads on Cat" or something like this. – Gerrit Begher Jan 17 '15 at 8:32
# Limit of a function with e

Calculus Level 2

If $f(x)$ is a continuous function defined on the domain $x > -\frac{1}{2}$ that satisfies $(e^x-1)f(x)=\ln(1+2x),$ what is $f(0)?$
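One standard way to evaluate this (a sketch, using the small-$x$ equivalents $\ln(1+u) \sim u$ and $e^x - 1 \sim x$ together with the continuity of $f$):

```latex
f(0) = \lim_{x\to 0} f(x)
     = \lim_{x\to 0} \frac{\ln(1+2x)}{e^x - 1}
     = \lim_{x\to 0} \frac{2x}{x}
     = 2.
```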
# Enthalpy and stoichiometry

1. ### Agent M27

1. The problem statement, all variables and given/known data
The change in internal energy for the combustion of 1.0 mol of octane at a pressure of 1.0 atm is 5084.5 kJ. If the change in enthalpy is 5074.2 kJ, how much work is done during the combustion? Find work in kJ.

2. Relevant equations
ΔE = ΔH + (−PΔV)
w = −PΔV

3. The attempt at a solution
ΔE = 5084.5 kJ
ΔH = 5074.2 kJ
5084.5 − 5074.2 = −1 atm (ΔV)
−10.3 = ΔV
w = −(1)(−10.3) = 10.3
10.3 L·atm × 101.3 J/(L·atm) = 1043.39 J
1043.39/1000 = 1.04339 kJ

This is an online homework set, so when I input the answer it usually will give me a hint if I am close, but I am not getting anything. Can anyone spot my error? Thanks in advance.

Joe

2. ### nessiejo236

I don't know if you have already figured this out, but I thought I'd let you know I found your problem. The way you solved it, your change in volume ended up in kJ/atm when it should be in liters. I just took 10300 J and divided it by 101.3 J/(L·atm) to end up with about 101.68 L. 10.3 kJ/atm × 1000 J/1 kJ × 1 L·atm/101.3 J = 101.68 L. Hopefully that helped if you haven't already figured it out!
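The unit bookkeeping in the reply can be checked in a few lines (a sketch using the thread's numbers and the 101.3 J per L·atm conversion factor quoted above):

```python
# Check of the unit conversion discussed in the thread above.
dE = 5084.5            # kJ (magnitude of internal energy change)
dH = 5074.2            # kJ (magnitude of enthalpy change)
J_PER_L_ATM = 101.3    # conversion factor used in the thread

w = dE - dH                     # p-V work, in kJ
dV = w * 1000 / J_PER_L_ATM     # volume change in liters at P = 1 atm

print(round(w, 1))    # 10.3
print(round(dV, 2))   # 101.68
```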
GTU Computer Engineering (Semester 4)
Mathematics 4
May 2016
Total marks: -- Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary

Short Questions
1(a) The complex conjugate of $$\dfrac{i}{1-i}$$ is _______ . 1 M
1(b) A mapping which preserves only magnitude is known as _______ mapping. 1 M
1(c) If z = cos θ + i sin θ then sin nθ = _______ . 1 M
1(d) The value of $$\int _c \dfrac{e^z}{(z-3)^2}dz$$ where C: |z| = 2 is _______ . 1 M
1(e) Is the set |z-1+2i| ≤ 2 a domain? 1 M
1(f) Find the principal value of i^(1-i). 1 M
1(g) Prove $$\lim_{z\rightarrow 1}\dfrac{iz}{3}=\dfrac{i}{3}$$ by definition. 1 M
1(h) Show that sin(log i^i) = -1. 1 M
1(i) Define residue. 1 M
1(j) While evaluating a definite integral by trapezoidal rule, the accuracy can be increased by taking _______ number of sub-intervals. 1 M
1(k) The relationship between E and is _______ . 1 M
1(l) The order of convergence in Newton-Raphson method is _______ . 1 M
1(m) Iterative formula for finding the square root of N by Newton-Raphson method is _______ . 1 M
1(n) Putting n = 1 in the Newton-Cotes quadrature formula, the rule obtained is _______ . 1 M

2(a) Find all the roots of (1+i)^(2/3). 3 M
2(b) Show that f(z) = log z is analytic everywhere except at the origin. 4 M
Solve any one question from Q.2(c) & Q.2(d)
2(c) Prove that u = x^2 - y^2 and $$v=-\dfrac{y}{x^2+y^2}$$ are harmonic but u + iv is not regular. 7 M
2(d) Examine the nature of the function $$f(z)=\left\{\begin{matrix} \dfrac{x^3y(y-ix)}{x^6+y^2} & ,z\neq 0\\ 0 &, z=0 \end{matrix}\right.$$ in the region including the origin. 7 M
Solve any three questions from Q.3(a), Q.3(b), Q.3(c) & Q.3(d), Q.3(e), Q.3(f)
3(a) Find the analytic function f(z) = u + iv if v = e^x (x sin y + y cos y). 3 M
3(b) Evaluate using Cauchy's integral formula $$\int _c \dfrac{3z^2+z+1}{(z^2-1)(z+3)}dz$$ where C is the circle |z| = 2.
4 M
3(c) Using contour integration evaluate the real integral $$\int _0 ^{2\pi} \dfrac{\cos 3\theta}{5-4\cos \theta}d\theta.$$ 7 M
3(d) If f(z) = u + iv is analytic in a domain D then prove that $$\left ( \dfrac{\partial ^2}{\partial x^2}+\dfrac{\partial ^2}{\partial y^2} \right )|Re(f(z))|^2=2|f'(z)|^2$$ 3 M
3(e) Determine the linear fractional transformation that maps z1 = 0, z2 = 1, z3 = ∞ onto w1, w2 = -i, w3 = 1 respectively. 4 M
3(f) Expand $$f(z)=\dfrac{1}{(z+2)(z+4)}$$ valid for the regions (i) |z| < 2 (ii) 2 < |z| < 4 (iii) |z| > 4. 7 M
Solve any three questions from Q.4(a), Q.4(b), Q.4(c) & Q.4(d), Q.4(e), Q.4(f)
4(a) Find the dominant eigenvalue of $$A=\begin{bmatrix} 3 & -5\\ -2 & 4 \end{bmatrix}$$ by the power method. 3 M
4(b) Evaluate $$\int _0^6\dfrac{dx}{1+x^2}$$ by using (i) trapezoidal rule (ii) Simpson's 1/3 rule, taking h = 1. 4 M
4(c) Prove that the transformation $$w=\dfrac{z}{1-z}$$ maps the upper half of the z-plane onto the upper half of the w-plane. What is the image of |z| = 1 under this transformation? 7 M
4(d) Solve the system of equations by the Gauss-Seidel method:
10x1 + x2 + x3 = 6
x1 + 10x2 + x3 = 6
x1 + x2 + 10x3 = 6
3 M
4(e) Evaluate the integral $$\int ^1_0\dfrac{dt}{1+t}$$ by one-point, two-point and three-point Gaussian formulas. 4 M
4(f) The population of the town is given below. Estimate the population for the years 1895 and 1930 using suitable interpolation.
Year, x: 1891 1901 1911 1921 1931
Population (in thousands), f(x): 46 66 81 93 101
7 M
Solve any three questions from Q.5(a), Q.5(b), Q.5(c) & Q.5(d), Q.5(e), Q.5(f)
5(a) Find, up to four decimal places, the root of the equation sin x = e^(-x) using the Newton-Raphson method, starting with x0 = 0.6. 3 M
5(b) Find the negative root of x^3 - 7x + 3 = 0 by the bisection method up to two decimal places. 4 M
5(c) Apply the improved Euler method to solve the initial value problem $$\dfrac{dy}{dx}=\log (x+y)$$ with y(1) = 2, taking h = 0.2, for x = 1.2 and x = 1.4 correct up to four decimal places.
7 M
5(d) Express the function $$\dfrac{3x^2-12x+11}{(x-1)(x-2)(x-3)}$$ as a sum of partial fractions, using Lagrange's formula. 3 M
5(e) Using Newton's divided difference formula, find a polynomial and also find f(6).
x: 1 2 4 7
f(x): 10 15 67 430
4 M
5(f) Apply the fourth order Runge-Kutta method to find y(0.2) given $$\dfrac{dy}{dx}=x+y,$$ y(0) = 1 (taking h = 0.1). 7 M
Combining Resistors in Series & Parallel

Problem: In the circuit of Fig. E26.15, each resistor represents a light bulb. Let R1 = R2 = R3 = 4.50 Ω and ε = 9.00 V. (a) What is the current in each of the remaining bulbs R1, R2, and R3? (b) What is the power dissipated in each of the remaining bulbs?

Expert Solution

Equivalent resistance for 2 resistors in parallel:
$$R_{eq} = \dfrac{R_1 R_2}{R_1 + R_2}$$

Equivalent resistance for resistors in series:
$$R_{eq} = R_1 + R_2 + \dots + R_n$$

Current:
$$i = \dfrac{V}{R}$$

Power:
$$P = i^2 R$$

(a) R_eq = R + (R)(R)/(R + R) = R + R/2 = 1.5R
R_eq = (1.5)(4.50) = 6.75 Ω
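A quick numeric check of the reduction above (a sketch; it assumes, as in the expert solution, that R1 is in series with the parallel pair R2 ∥ R3):

```python
# Numeric check of the series-parallel reduction above.
R = 4.50     # ohms, R1 = R2 = R3
emf = 9.00   # volts

R_pair = (R * R) / (R + R)   # R2 parallel R3 = R/2
R_eq = R + R_pair            # R1 in series with the pair = 1.5 R

i1 = emf / R_eq              # total current, which flows through R1
i2 = i3 = i1 / 2             # splits equally between the identical R2, R3

p1 = i1 ** 2 * R             # power dissipated in each bulb
p2 = p3 = i2 ** 2 * R

print(R_eq)                          # 6.75
print(round(i1, 3), round(i2, 3))    # 1.333 0.667
print(round(p1, 1), round(p2, 1))    # 8.0 2.0
```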
# According to MOT, when two atomic orbitals overlap, molecular orbitals are formed; the number of molecular orbitals formed is equal to the number of atomic orbitals overlapping. The two molecular orbitals formed by LCAO (linear combination of atomic orbitals), in phase or out of phase, are known as bonding and antibonding molecular orbitals respectively. The energy of a bonding molecular orbital is lower than that of the pure atomic orbitals by an amount Δ, known as the stabilization energy. The energy of an antibonding molecular orbital is increased by Δ' (the destabilization energy).

Q. How many nodal planes are present in the σ_s and σ_p bonding molecular orbitals?

Text Solution (options): zero, 1, 2, 3
This is the archived website of SI 486H from the Spring 2016 semester. Feel free to browse around; you may also find more recent offerings at my teaching page.

# Unit 1: Discrete Math

After some introductory remarks, we'll go into some of the underlying math on probability and information theory that will get used over and over throughout the class. We'll just cover the basics here; expect more math as we go along and need to use it!

# 1 Introduction

In this class, we are going to learn about some of the ways random numbers get used in computing. To paraphrase the textbook, programs that make use of random numbers are often simpler or faster than their non-random counterparts (or both!). We will start by examining what random numbers are and how computer programs can obtain them. Then the bulk of the course will be spent looking at some well-known problems that can be solved more quickly or more simply using randomized algorithms. Finally, we'll look at things from a theoretical standpoint and ask whether there are some larger implications of all this. But first, let's look at an example that illustrates the kind of way randomness can be used effectively.

## 1.1 Airplane terrorists

Consider the following scenario: An airplane carries 200 passengers. Due to strictly-enforced regulations, each passenger can carry on at most 3 ounces of liquid. But with 60 ounces of liquid, a bomb could be constructed to crash the plane. Therefore, if 20 or more passengers all manage to get 3 ounces of bomb-making liquid on the plane, there is a big problem. Fortunately, the vast majority of flights have no terrorists whatsoever: on at least 999 out of every 1000 flights, not a single passenger carries any bomb-making liquid.

To be clear, the input to any algorithm is going to be a list of 200 passengers, in some arbitrary order. All the algorithm can do is screen a given passenger for bomb-making liquid, and get a yes/no answer.
We want to develop effective ways of detecting and ultimately avoiding flights that are hijacked as described above. There is a way to test if any single passenger has bomb-making liquid, but it is somewhat disruptive and so we'd like to perform as few tests as possible. Therefore we have two (possibly competing) goals:

1. Allow only non-hijacked planes to fly
2. Screen the fewest number of passengers for bomb-making liquid

The first goal can be classified as correctness, and the second as efficiency. In designing typical algorithms, we view the correctness requirement as absolute and work hard to optimize the efficiency. A theme we will repeatedly see with randomized algorithms is that they allow trade-offs between correctness and efficiency. Often, the loss of a correctness guarantee (a small chance that the algorithm gives the wrong answer) is more than offset by the gain in efficiency.

# 2 Discrete Probability

In order to understand randomness, we need to understand probability. We'll be covering some math concepts as needed throughout the class, but just to get started we need to cover some basics. You are strongly encouraged to read the entire chapter "An introduction to discrete probability", linked in the readings above.

## 2.1 Experiments and Outcomes

An experiment is anything for which the outcome may not be certain. This could be something like rolling a die, or flipping a coin, dealing a hand of cards in poker, drawing winning lottery numbers, or running a randomized algorithm. More formally, an experiment is defined as a set of outcomes. (This set is usually written as $$\Omega$$ and is also called the sample space.) Very importantly, the set of outcomes must satisfy two properties:

1. They must be exhaustive, meaning that they contain every possibility.
2. They must be mutually exclusive, meaning that it's impossible to simultaneously have two different outcomes.

For example:

• Flipping a coin: There are two outcomes, heads or tails.
• Rolling a six-sided die: There are 6 outcomes, one for each side of the die.
• Dealing a hand in poker: There are $$\binom{52}{5} = 2598960$$ ways to choose 5 cards out of 52.

When we are talking about a complicated process (such as a sophisticated randomized algorithm), the outcomes are always simple; there just might be way too many of them to count. And the outcomes depend on the random choices only, not on whatever we're trying to accomplish. For example, in poker, you're trying to win (perhaps by getting a good hand). Or maybe you're trying to lose. Or maybe you're playing cribbage instead. None of that changes the set of outcomes for a five-card hand that is dealt randomly from a 52-card deck.

## 2.2 Probability

Every outcome in the sample space has some known probability that it will occur. This probability is a number between 0 and 1, where 0 means "definitely won't happen" and 1 means "definitely will happen." (Note that 1 = 100%, but in hardcore math mode we just stick with the decimals thank you very much.) This set of probability assignments is called a probability distribution, and formally it's a function $$\Pr$$ that maps from the sample space $$\Omega$$ to real numbers in the range $$[0,1]$$. For some outcome $$\omega$$, we write $$\Pr[\omega]$$ as the probability that $$\omega$$ will happen. Because the set of outcomes has to be exhaustive and mutually exclusive, it's guaranteed that the sum of probabilities of all outcomes must equal 1; that is: $\sum_{\omega \in \Omega} \Pr[\omega] = 1$

The probability distribution of the outcomes is usually known in advance. And frequently, the probability of every outcome is the same. This is called the uniform distribution. For example:

• Flipping a coin: $$\Pr[\text{heads}] = \Pr[\text{tails}] = \tfrac{1}{2}$$
• Rolling a six-sided die: The probability of rolling a 2 is $$\tfrac{1}{6}$$.
## 2.3 Combining Experiments

Most complex situations, such as playing a complete game or running an algorithm, don't involve just a single randomized experiment but many of them. Fortunately, combining multiple independent experiments is really easy. Say I have experiments $$Exp_1$$ and $$Exp_2$$ with sample spaces $$\Omega_1$$ and $$\Omega_2$$. If these experiments are conducted independently from each other (so that the outcome of one doesn't in any way influence the outcome of the other), then they are combined as follows:

• The combined sample space is the set of pairs of outcomes from the two experiments, written formally as a cross-product $$\Omega_1 \times \Omega_2$$.
• The probability of any pair of outcomes is the product of the original probabilities.

For example, say I roll a 6-sided die with one hand and flip a coin with the other hand. This combined experiment now has 12 outcomes: $(1,H), (2,H), (3,H), (4,H), (5,H), (6,H), (1,T), (2,T), (3,T), (4,T), (5,T), (6,T)$ For example, the probability of rolling a 3 and getting tails is $$\Pr[(3,T)] = \Pr[3] \cdot \Pr[T] = \tfrac{1}{6} \cdot \tfrac{1}{2} = \tfrac{1}{12}.$$

## 2.4 Events

So far, none of this should be really surprising or even interesting. That's because we usually don't care about the outcomes - their probabilities are already known to start out with, and are usually uniformly distributed so the probabilities are all the same. Boring. What we really care about is some combination of outcomes that has some meaning relative to whatever game or algorithm or whatever else we're trying to accomplish. For example, in poker, you would really like to get a straight flush. This doesn't represent a single outcome, but multiple possible outcomes, because there are multiple different hands that would give you a straight flush.

An event in probability is a combination (subset) of outcomes, that has some meaning. Each outcome itself is called a simple event.
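The die-and-coin product experiment from above can be verified exhaustively in a few lines (a minimal sketch using exact fractions, so nothing is lost to rounding):

```python
from fractions import Fraction
from itertools import product

# Combined sample space for rolling a die and flipping a coin,
# with the product rule for independent experiments.
die = {i: Fraction(1, 6) for i in range(1, 7)}
coin = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

combined = {(d, c): die[d] * coin[c] for d, c in product(die, coin)}

print(len(combined))            # 12 outcomes
print(combined[(3, "T")])       # 1/12
print(sum(combined.values()))   # 1 -- probabilities still sum to one
```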
The probability of an event is the sum of the probabilities of the outcomes that make it up. Back to poker, there are 40 possible hands (outcomes) that result in a straight flush (the low card can be A up to 10, in any of 4 suits). So the straight flush "event" consists of these 40 hands. And the probability of getting a straight flush is $\frac{40}{\binom{52}{5}} = \frac{40}{2598960} \approx 0.000015391.$

For another example, consider rolling two six-sided dice (independently). We know there are $$6\times 6 = 36$$ equally-likely outcomes. What's the chance that the sum of the two dice equals 10? Well, there are 3 outcomes that make this happen: (4,6), (5,5), and (6,4). So the probability of getting a sum of 10 is $$\tfrac{3}{36} = \tfrac{1}{12}$$.

## 2.5 Random variables

We've already talked about one function on the outcomes of an experiment, which is the probability measure $$\Pr$$ that maps each outcome to some number between 0 and 1. Sometimes we also want to associate some numerical value with each outcome, which has nothing to do with its probability. For example, each possible lottery ticket might be associated to the amount of money that ticket will win. The probabilities of every possible ticket are the same, but the winnings are definitely not!

For largely historical reasons, these functions are not called (or written as) functions, but instead are called random variables. They are usually written with a capital letter like $$X$$ or $$Y$$. For example, if we go back to the experiment of rolling two six-sided dice, define the random variable $$X$$ to be the sum of the two dice. We just figured out the probability that this equals 10, which can now be written conveniently as $\Pr[X = 10] = \tfrac{1}{12}.$

The random variable notation is a little strange to get used to, but it also allows us to write down more complicated events easily. For example, consider the event that the two dice add up to at least 10.
There are actually 6 outcomes that make this happen: the three ways to make 10, plus the two ways (5,6) and (6,5) to make 11, and the single way (6,6) to make 12. Therefore $\Pr[X \ge 10] = \tfrac{6}{36}.$

In this way, random variables are nice not only as a way of assigning a value to each outcome, but also as a way of describing more complicated events. You just have to remember one thing (repeat after me): A random variable is a function, not a variable.

Once we have a random variable, it's natural to ask about some statistical measures on that random variable, such as mean, mode, variance (standard deviation), and so on. The mean, or average, is the most important one, and it has a special name: the expected value. Formally, the expected value of a random variable $$X$$, written as $${\mathop{\mathbb{E}}}[X]$$, is a weighted average formed by summing the value of $$X$$ for each outcome, times the probability of each outcome. Here's a complete definition:

Definition: Expected value
Suppose an experiment has $$n$$ outcomes, each of which occurs with probability $$p_i$$. And suppose that a random variable $$X$$ associates with each outcome some value $$v_i$$. Then the expected value of $$X$$ is the sum of the value of each outcome, times the probability of that outcome, or ${\mathop{\mathbb{E}}}[X] = \sum_{i=1}^n p_i\ v_i$

Note we can also write this sum in another way as: ${\mathop{\mathbb{E}}}[X] = \sum_i i \cdot \Pr[X = i]$

## 2.6 More distributions

We already said that the probability distribution of the set of outcomes is usually (but not always) uniform - meaning every outcome is equally likely. A random variable also defines a probability distribution, if we consider the probability that the random variable takes on each possible value. And these distributions are usually not uniform. For example, consider the experiment of flipping three coins, and the random variable $$X$$ defined as the number of heads among the three coin flips.
For each of the $$2^3 = 8$$ outcomes, we can easily see the value of this random variable:

Outcome  $$X$$
HHH      3
HHT      2
HTH      2
HTT      1
THH      2
THT      1
TTH      1
TTT      0

The distribution of the outcomes is uniform - each occurs with probability $$\tfrac{1}{8}$$. But the distribution of $$X$$ is seen by flipping this table around, adding together the rows with the same value:

$$X$$  Num. of Outcomes  $$\Pr[X=i]$$
0      1                 1/8
1      3                 3/8
2      3                 3/8
3      1                 1/8

In other words, you're much more likely to get 1 head or 2 heads than you are to get 0 or 3. This distribution actually has a name - it's called a binomial distribution with parameters $$p=.5$$ and $$n=3$$. You can find much more about the binomial distribution on Wikipedia. Other distributions that might come up in this class, along with my pithy description, are:

• Uniform distribution (rolling a fair $$n$$-sided die)
• Bernoulli distribution (flipping a weighted coin)
• Binomial distribution (flipping a weighted coin $$n$$ times and counting how many heads)
• Geometric distribution (flipping a weighted coin until the first time it comes up heads, and counting how many flips that took)

## 2.7 Calculating probabilities and expectations

There are going to be many, many times when we want to compute the probability of some event, or the expected value of some random variable. And of course these skills are useful in a variety of real-life situations, not just in analyzing randomized algorithms. There are basically three paths to take to perform one of these calculations:

1. List out every one of the outcomes, and then add up the probabilities (or the weighted sum, for an expectation). Since we always know the probability of the outcomes - they're usually all equally likely - this is mostly a matter of counting.
2. Estimate the probability using some approximations or bounds.
We will see some of these as we go along in this class, and if you look up the distributions mentioned above you can find links to some popular bounds or estimates for those distributions.
3. Approximate by actually running an experiment. This involves programming of course, and is a really excellent way to check your work from one of the first two methods.

Methods 1 and 3 are immediately available to you, and based on the definitions you should technically be able to calculate any probability or expectation that we could come up with. Of course the problem will often be that there are far too many possibilities to list them all out manually. Throughout the course, we will learn some counting techniques and math tricks to make method 1 easier, and we'll also see in many cases that just having an estimate or a bound (method 2) is good enough.

# 3 Back to algorithms

Now that we understand a little bit about probability, let's go back to the original example and see how probabilistic concepts show up in designing and analyzing algorithms.

## 3.1 Deterministic algorithm

A deterministic algorithm is the kind of algorithm you are used to: it takes some input, does some computation that depends only on that input, and then produces some output. The important thing here is that the actions taken by the algorithm depend only on whatever the input is. So if you run the algorithm twice on the same input, the algorithm will do exactly the same thing. In the case of the "airplane terrorists", we came up with the following deterministic strategy:

1. Screen the first 181 passengers
2. If any of the first 181 has bomb liquid, screen all 200
3. If at least 20 passengers have bomb liquid, don't let the plane fly.

This algorithm is correct because it's guaranteed not to let a hijacked plane fly. If the plane has at least 20 terrorists, we will definitely screen at least one of them on step 1, and then the entire plane will be checked and ultimately grounded.
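The deterministic strategy above can be sketched in a few lines (passengers are modeled as booleans, True meaning the passenger carries bomb-making liquid; the function name is made up for illustration):

```python
# Sketch of the deterministic screening strategy described above.
def deterministic_check(passengers):
    """passengers: list of 200 booleans (True = carrying bomb liquid).
    Returns ("flies" or "grounded", number of passengers screened)."""
    screened = 181
    if any(passengers[:181]):        # step 1: screen the first 181
        screened = 200               # step 2: screen all 200
        if sum(passengers) >= 20:    # step 3: ground a hijacked plane
            return "grounded", screened
    return "flies", screened

# A clean flight screens only 181 passengers; a flight with 20 hijackers is
# always caught, since at most 19 of them can hide outside the first 181.
assert deterministic_check([False] * 200) == ("flies", 181)
assert deterministic_check([True] * 20 + [False] * 180)[0] == "grounded"
```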
Conversely, if the plane does not have at least 20 terrorists, there is no way that it will be grounded by our algorithm.

A standard, worst-case analysis of this algorithm is pretty straightforward: the worst case is that we have to screen all 200 passengers. So by this analysis our algorithm is no better than just screening every passenger, every time. That seems wrong though, because we know that most flights won't have any terrorists at all. So instead, we will do a probabilistic analysis of the number of passengers screened. Rather than just focusing on the worst-case cost, we will instead focus on the "expected" cost, which corresponds roughly to the average cost over many, many flights.

Using our terminology from probability theory here, the experiment in this case will be the choice of which passengers (if any) have bomb-making liquid. And the random variable $$X$$ is defined as the number of passengers screened by the algorithm. In our case, the two possible events are "there is a hijacker within the first 181 passengers" or "there isn't a hijacker in the first 181 passengers", as this is what determines the total number of passengers screened. From the problem definition, the probability that there is no hijacker on the plane at all - and so certainly not one within the first 181 passengers - is at least 999/1000. Therefore the expected cost is at most: $E(\text{passengers screened}) \le \frac{999}{1000}\cdot 181 + \frac{1}{1000}\cdot 200 = 181.019$ In other words, the number of passengers screened on average is less than 182.

## 3.2 Randomized algorithm

The basic idea of a randomized algorithm is pretty obvious here: we will randomly select which passengers to screen. In class we looked at a variety of ways to do this random selection. As is often the case (in math, in life, in computer science, ...) the best approach is the simplest one:

1. Select $$n$$ passengers at random
2. Screen these $$n$$ passengers for bomb-making liquid
3.
If any of the $$n$$ has bomb liquid, screen all 200 passengers 4. If at least 20 passengers have bomb liquid, don't let the plane fly. Here $$n$$ is some constant that is selected beforehand. The big question is: how large should $$n$$ be? If it's small, then the algorithm will be rather efficient (not many passengers screened), but it will also not be very correct (some planes will blow up). On the other hand, if $$n$$ is very large, the algorithm will be more correct but less efficient. One extreme is the equivalent of the deterministic algorithm: let $$n=181$$. Then there is no chance of failure, so the algorithm is perfectly correct, but not very efficient. The other extreme would be to set $$n=0$$ and not screen any passengers. Then the algorithm is quite efficient, but there is a one-in-a-thousand chance of failure. What we see is something very typical in randomized algorithms: a trade-off between correctness and efficiency. It is easy to view this as a shortcoming of randomized algorithms, since the algorithm will fail with some (small) probability. But in fact it is a feature that we don't get with deterministic algorithms. In Problem 3, you showed that the deterministic algorithm is completely uncompromising: if we improve the efficiency even the smallest amount, the correctness (probability of failure) goes immediately to the max. On the other hand, Problem 2 showed that, by only scanning about one-fourth of the passengers, the probability of failure is less than one in a million. The general theme here is called the principle of many witnesses and we will see it crop up repeatedly in this class. In this scenario, we are taking advantage of the fact that many hijackers must collude in order to blow up the plane: the 20 (or more) hijackers are the "many witnesses". Deterministically, it can be difficult to find a single witness, but with randomization, it becomes much easier. The other principle at work here is called foiling an adversary. 
The consistency of deterministic algorithms is actually their weakness: they can be fooled by someone who knows the algorithm! On the other hand, if the strategy is random, then while it can be beaten some of the time, there is no way to beat it all of the time. This is a powerful idea behind many randomized algorithms that we will see.
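The correctness/efficiency trade-off can be explored numerically. Below is a sketch (the parameter choices are illustrative, not taken from the notes): with 20 hijackers among 200 passengers, a random screen of $$n$$ passengers misses all of them with probability $$\binom{180}{n}/\binom{200}{n}$$, and a short simulation agrees with that exact value.

```python
import random
from math import comb

# Probability that screening n random passengers misses every one of the
# 20 hijackers (placed, WLOG, in seats 0..19), estimated by simulation.
def miss_probability(n, trials=50_000, seed=1):
    rng = random.Random(seed)
    misses = sum(
        all(seat >= 20 for seat in rng.sample(range(200), n))
        for _ in range(trials)
    )
    return misses / trials

n = 50
exact = comb(180, n) / comb(200, n)   # hypergeometric: no hijacker sampled
print(round(exact, 4), round(miss_probability(n), 4))
```

Multiplying this miss probability by the at-most 1/1000 chance that a flight is hijacked at all gives the overall failure probability of the randomized strategy for this choice of $$n$$.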
# Efficiency of the prime generating constant $2.920050977316 \dots$ for the purpose of compressing a list of primes.

The constant $$c \approx 2.920050977316 \dots$$ has the property that if $$\{a_n\}$$ is the sequence defined by $$a_1 = c, a_{n+1} = \lfloor a_n \rfloor (a_n - \lfloor a_n \rfloor + 1),$$ then $$\lfloor a_n \rfloor$$ is the $$n$$th prime. We need to calculate primes in order to calculate the constant rather than the other way around, so this does not yield a method of generating new primes. However, we might be able to use the constant to compress an already known list of primes. The question then becomes, how many digits of $$c$$ are needed to generate the first $$n$$ primes? Note that it takes $$\sim n \log n$$ digits to store the first $$n$$ primes naively and $$\sim n \log \log n$$ digits by storing the differences between consecutive primes.

• Upvoted because that's a cool fact. What's the reference you got it from? (Perhaps its bibliography could answer your question?) Dec 6 '20 at 5:20
• @JakeMirra Added a link to the paper. Dec 6 '20 at 5:21
• While the paper is behind a paywall, it looks to me that every integer sequence with the property that $s_n \leq s_{n+1} < 2s_n$ can be represented by that recurrence on the correct starting value - which would suggest that this is probably not an effective method of compression. (At least no better than storing, say, the differences between consecutive primes) Dec 6 '20 at 5:28
• @MiloBrandt That's correct, you can see the paper for yourself here: arxiv.org/pdf/2010.15882.pdf. Even then, anything less than $n\log \log n$ (linear for example) is interesting. Dec 6 '20 at 5:31

We can refine the result of the paper as follows, where we will use $$\lfloor x\rfloor$$ for the floor function and $$\{x\}=x-\lfloor x\rfloor$$ to be the fractional part function.

Let $$s_0,s_1,s_2,\ldots,s_n$$ be a finite sequence of integers such that $$s_k \leq s_{k+1} < 2s_k$$ for every $$k$$.
Let $$I$$ be the set of $$c$$ for which the sequence inductively defined by $$a_1=c$$ and $$a_{n+1}=\lfloor a_n\rfloor(1+\{a_n\})$$ has the property that $$\lfloor a_k\rfloor = s_k$$ for each $$k$$. There is some $$z$$ such that $$I=\left[z, z + \frac{1}{\prod_{i=0}^{n-1}s_i}\right).$$

This tells us that we need to choose the starting value in an interval of length $$\frac{1}{\prod_{i=0}^{n-1}s_i}$$, which generically takes $$\log_{10}\left(\prod_{i=0}^{n-1}s_i\right)$$ digits of precision. However, just writing out all the terms in the sequence would require writing about the same number of digits, since $$\log_{10}(\prod s_i) = \sum \log_{10}(s_i)$$. Representing sequences this way doesn't have any advantage, at least asymptotically - and this is not surprising, since this method is a fairly literal transcription of sequences satisfying the given inequalities - and this inequality alone is not a very helpful way to single out the sequence of primes.

Let's define $$f(x)=\lfloor x\rfloor (1+\{x\})$$. Note that $$f$$ is just a linear function of slope $$n$$ on each interval $$[n,n+1)$$. Define its iterates as $$f^0(x)=x$$ and $$f^{n+1}(x)=f(f^{n}(x))$$. Let's say that a given constant $$c$$ represents a sequence $$s_0,\ldots,s_n$$ if $$\lfloor f^k(c) \rfloor = s_k$$ for each $$k$$. Let me give two inductive proofs of the claimed theorem using this terminology.

Proof 1: We prove the statement directly by induction on $$n$$. The statement is trivial if $$n=0$$. Suppose that we wish to represent $$s_0,s_1,s_2,\ldots,s_{n+1}$$. By the inductive hypothesis, the set of representations of $$s_1,s_2,\ldots,s_{n+1}$$ is an interval $$I$$ of the prescribed length. A representation of $$s_0,s_1,s_2,\ldots,s_{n+1}$$ is just some $$x$$ such that $$\lfloor x\rfloor = s_0$$ and $$f(x) \in I$$. The second part of this condition simplifies to saying $$s_0(x-s_0+1) \in I$$, since we know the floor of $$x$$.
Note that, since $$I$$ is a subset of $$[s_1,s_1+1) \subseteq [s_0,2s_0)$$, the condition that $$s_0(x-s_0+1) \in I$$ implies $$\lfloor x\rfloor = s_0$$. The set of representations of $$s_0,s_1,s_2,\ldots, s_{n+1}$$ therefore is the preimage of $$I$$ under a linear function of slope $$s_0$$, hence has length $$\frac{1}{s_0}$$ times the length of $$I$$. This immediately proves the desired statement. Proof 2: For this proof, we use an inductive argument that builds up the sequence from the end rather than from the start, which enables somewhat clearer computation of the interval itself. We need to prove an additional statement as part of the inductive hypothesis, however: If $$I$$ is the interval of representations for $$s_0,s_1,s_2,\ldots,s_n$$, then $$f^n$$ is linear when restricted to $$I$$ and has image $$[s_n,s_n+1)$$. Essentially, we need that $$f$$ behaves well when we know what it represents and that it hits every possible terminating value. Clearly, if $$n=0$$, the sequence has only one term and is represented exactly on the interval $$[s_0,s_0+1)$$, on which $$f^0$$ is the identity function. Next, let $$I$$ be the interval of $$x$$ representing the sequence $$s_0,s_1,\ldots,s_{n-1}$$. Note that $$f^{n-1}$$ restricted to $$I$$ is, by hypothesis, some linear function taking $$I$$ to $$[s_{n-1},s_{n-1}+1)$$. However, $$f$$ restricted to the interval $$[s_{n-1},s_{n-1}+1)$$ is also linear, satisfying $$f(x)=s_{n-1}(x-s_{n-1}+1)$$. The composition $$f^n$$ is therefore also linear on the interval $$I$$ - and, by calculation, takes $$I$$ to the interval $$[s_{n-1},2s_{n-1})$$. Note that, by hypothesis, the interval $$[s_n,s_n+1)$$ is contained within $$[s_{n-1},2s_{n-1})$$ and, in fact, is a proportion of exactly $$\frac{1}{s_{n-1}}$$ of that interval, hence its preimage under $$f^n$$ within $$I$$ - which is the set of values representing $$s_0,s_1,\ldots,s_{n-1},s_n$$ - is the same proportion of $$I$$.
In particular, since $$I$$ had length $$\frac{1}{\prod_{i=0}^{n-2}s_i}$$, it must be that this preimage has length $$\frac{1}{\prod_{i=0}^{n-1}s_i}$$, as was desired. Moreover, observe that we have already established the refinement we used for the inductive hypothesis. If we work out the details of the calculation implied here, we end up determining that $$z=s_0+\sum_{k=0}^{n-1} \frac{s_{k+1}-s_k}{\prod_{i=0}^{k} s_i},$$ which also extends to finding the (unique) $$c$$ that generates an infinite sequence satisfying the condition that $$s_k \leq s_{k+1}<2s_k$$ for every $$k$$ and the additional condition that $$s_{k+1}<2s_k-1$$ for infinitely many $$k$$.
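As a sanity check on both the recurrence and the closed form for $$z$$, here is a short Python sketch (function names are mine) that computes the left endpoint exactly with `fractions.Fraction` for the first ten primes and then iterates the map to confirm that the floors reproduce the sequence. The resulting $$z$$ agrees with the constant $$2.920050977316\dots$$ to within the interval's width:

```python
from fractions import Fraction

def representing_constant(s):
    """Left endpoint z = s_0 + sum_{k=0}^{n-1} (s_{k+1} - s_k) / (s_0 s_1 ... s_k)."""
    z = Fraction(s[0])
    prod = 1
    for k in range(len(s) - 1):
        prod *= s[k]
        z += Fraction(s[k + 1] - s[k], prod)
    return z

def floors_of_orbit(c, n):
    """Iterate a -> floor(a) * (1 + {a}) exactly; return the first n floors."""
    a, floors = c, []
    for _ in range(n):
        f = a.numerator // a.denominator  # floor of a positive Fraction
        floors.append(f)
        a = f * (1 + (a - f))
    return floors

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
z = representing_constant(primes)
print(float(z))  # close to 2.920050977316...
```

Because the arithmetic is exact, the orbit of $$z$$ recovers all ten primes; a floating-point starting value would lose digits at a rate governed by the product of the primes seen so far.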
Getting started with the computational analysis of games: Playing "stripped down" poker Theodore L. Turocy University of East Anglia EC'16 Workshop 24 July 2016 In [1]: import gambit Gambit version 16.0.0 is the current development version. You can get it from http://www.gambit-project.org. In [2]: gambit.__version__ Out[2]: '16.0.0' Inspecting a game¶ The game that we will use as our starting point is one which many of you may have encountered in some variation. Myerson's (1991) textbook refers to this as a one-card poker game; Reiley et al. (2008) call this "stripped-down poker." There is a deck consisting of two types of cards: Ace and King. There are two players, Alice and Bob. Both start by putting 1 in the pot. One player (Alice) draws a card; initially assume the cards are in equal proportion in the deck. Alice sees her card, and then decides whether she wants to raise (add another 1 to the pot) or fold (and concede the pot to Bob). If she raises, play passes to Bob, who must decide whether to meet her raise (and add another 1 to the pot) or pass (and concede the pot to Alice). If Alice raises and Bob meets, Alice reveals her card: if it is an Ace, she takes the pot, whereas if it is a King, Bob does. Here is what the game looks like in extensive form (as drawn by Gambit's graphical viewer, which we will touch on separately): In [3]: g = gambit.Game.read_game("poker.efg") Gambit's .efg format is a serialisation of an extensive game. The format looks somewhat dated (and indeed it was finalised in 1994), but is fast: recently I loaded a game with about 1M nodes in under 2s.
In [4]: g Out[4]: EFG 2 R "A simple poker example" { "Alice" "Bob" } "" c "" 1 "" { "A" 1/2 "K" 1/2 } 0 p "" 1 1 "a" { "R" "F" } 0 p "" 2 1 "b" { "M" "P" } 0 t "" 1 "Alice wins big" { 2, -2 } t "" 2 "Alice wins" { 1, -1 } t "" 3 "Bob wins" { -1, 1 } p "" 1 2 "k" { "R" "F" } 0 p "" 2 1 "b" { "M" "P" } 0 t "" 4 "Bob wins big" { -2, 2 } t "" 2 "Alice wins" { 1, -1 } t "" 3 "Bob wins" { -1, 1 } The game offers a "Pythonic" interface. Most objects in a game can be accessed via iterable collections. In [5]: g.players Out[5]: [<Player [0] 'Alice' in game 'A simple poker example'>, <Player [1] 'Bob' in game 'A simple poker example'>] All objects have an optional text label, which can be used to retrieve it from the collection: In [6]: g.players["Alice"] Out[6]: <Player [0] 'Alice' in game 'A simple poker example'> In this game, Alice has two information sets: when she has drawn the Ace, and when she has drawn the King: In [7]: g.players["Alice"].infosets Out[7]: [<Infoset [0] 'a' for player 'Alice' in game 'A simple poker example'>, <Infoset [1] 'k' for player 'Alice' in game 'A simple poker example'>] The chance or nature player is a special player in the players collection. In [8]: g.players.chance Out[8]: <Player [CHANCE] in game 'A simple poker example'> In [9]: g.players.chance.infosets Out[9]: [<Infoset [0] '' for player '' in game 'A simple poker example'>] Gambit does sorting of the objects in each collection, so indexing collections by integer indices also works reliably if you save and load a game again. In [10]: g.players.chance.infosets[0].actions Out[10]: [<Action [0] 'A' at infoset '' for player '' in game 'A simple poker example'>, <Action [1] 'K' at infoset '' for player '' in game 'A simple poker example'>] We can assign particular game objects to variables for convenient referencing. In this case, we will explore the strategic effects of changing the relative probabilities of the Ace and King cards. 
In [11]: deal = g.players.chance.infosets[0] In the original version of the game, it was assumed that the Ace and King cards were equally likely to be dealt. In [12]: deal.actions["A"].prob Out[12]: $\frac{1}{2}$ In [13]: deal.actions["K"].prob Out[13]: $\frac{1}{2}$ Computing Nash equilibria¶ Gambit offers a variety of methods for computing Nash equilibria of games, which we will discuss in more detail separately. This is a two-player game in extensive form, for which we can use Lemke's algorithm applied to the sequence form of the game. In the Python interface, solution methods are offered in the gambit.nash module. Each method also is wrapped as a standalone command-line binary. In [14]: result = gambit.nash.lcp_solve(g) The result of this method is a list of (mixed) behaviour profiles. (Future: the return value will be encapsulated in a results class retaining more detailed metadata about the run of the algorithm.) In this game, there is a unique (Bayes-)Nash equilibrium. In [15]: len(result) Out[15]: 1 A behaviour profile looks like a nested list. Entries are of the form profile[player][infoset][action]. In [16]: result[0] Out[16]: $\left[\left[\left[1,0\right],\left[\frac{1}{3},\frac{2}{3}\right]\right],\left[\left[\frac{2}{3},\frac{1}{3}\right]\right]\right]$ In [17]: result[0][g.players["Alice"]] Out[17]: $\left[\left[1,0\right],\left[\frac{1}{3},\frac{2}{3}\right]\right]$ In [18]: result[0][g.players["Bob"]] Out[18]: $\left[\left[\frac{2}{3},\frac{1}{3}\right]\right]$ We can compute various interesting quantities about behaviour profiles. Most interesting is perhaps the payoff to each player; because this is a constant-sum game, this is the value of the game. In [19]: result[0].payoff(g.players["Alice"]) Out[19]: $\frac{1}{3}$ In [20]: result[0].payoff(g.players["Bob"]) Out[20]: $\frac{-1}{3}$ Bob is randomising at his information set, so he must be indifferent between his actions there. We can check this. 
In [21]: result[0].payoff(g.players["Bob"].infosets[0].actions[0]) Out[21]: $-1$ In [22]: result[0].payoff(g.players["Bob"].infosets[0].actions[1]) Out[22]: $-1$ As we teach our students, the key to understanding this game is that Alice plays so as to manipulate Bob's beliefs about the likelihood she has the Ace. We can examine Bob's beliefs over the nodes (members) of his one information set. Given the structure of the betting rules, Bob becomes indifferent to his actions when he thinks there is a 3/4 chance Alice has the Ace. In [23]: result[0].belief(g.players["Bob"].infosets[0].members[0]) Out[23]: $\frac{3}{4}$ In [24]: result[0].belief(g.players["Bob"].infosets[0].members[1]) Out[24]: $\frac{1}{4}$ Construction of the reduced normal form¶ The call to lcp_solve above uses the sequence form rather than the (reduced) strategic form of the game. This representation takes advantage of the tree structure, and can avoid (in many games of interest) the exponential blowup of the size of the strategic form relative to the extensive form. (More details on this in a while!) Nevertheless, the reduced strategic form of a game can be of interest. Gambit implements transparently the conversions between the extensive and strategic representations. For games in extensive form, the reduced strategic form is computed on-the-fly from the game tree; that is, the full normal form payoff tables are not stored in memory. Each player has a data member strategies which lists the reduced normal form strategies (s)he has. 
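As an aside, Bob's indifference belief of 3/4 computed above can be cross-checked by hand, independently of Gambit: if Bob believes Alice holds the Ace with probability $p$, meeting the raise pays him $-2p + 2(1-p)$, while passing pays $-1$ for sure. A minimal sketch (function name is mine):

```python
from fractions import Fraction

def bob_meet_payoff(p):
    # Meet the raise: Bob loses 2 to an Ace, wins 2 against a King.
    return -2 * p + 2 * (1 - p)

p_star = Fraction(3, 4)
print(bob_meet_payoff(p_star))  # -1, the same as the sure payoff of passing
```

Setting $-2p + 2(1-p) = -1$ and solving gives $p = 3/4$, matching the belief Gambit reports.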
In [25]: g.players["Alice"].strategies Out[25]: [<Strategy [0] '11' for player 'Alice' in game 'A simple poker example'>, <Strategy [1] '12' for player 'Alice' in game 'A simple poker example'>, <Strategy [2] '21' for player 'Alice' in game 'A simple poker example'>, <Strategy [3] '22' for player 'Alice' in game 'A simple poker example'>] In [26]: g.players["Bob"].strategies Out[26]: [<Strategy [0] '1' for player 'Bob' in game 'A simple poker example'>, <Strategy [1] '2' for player 'Bob' in game 'A simple poker example'>] We can also do a quick visualisation of the payoff matrix of the game using the built-in HTML output (plus Jupyter's inline rendering of HTML!) Disclaimer: There's a bug in the 16.0.0 release which prevents the correct generation of HTML; this will be corrected in 16.0.1 (and is corrected in the 'master' branch of the git repository already). In [27]: import IPython.display; IPython.display.HTML(g.write('html')) Out[27]: A simple poker example 1 2 11 0,0 1,-1 12 1/2,-1/2 0,0 21 -3/2,3/2 0,0 22 -1,1 -1,1 Bonus note: Gambit also supports writing out games using Martin Osborne's sgame LaTeX style: https://www.economics.utoronto.ca/osborne/latex/. This doesn't have auto-rendering magic in Jupyter, but it's all ready to cut-and-paste to your favourite editor. In [28]: print g.write('sgame') \begin{game}{4}{2}[Alice][Bob] &1 & 2\\ 11 & $0,0$ & $1,-1$ \\ 12 & $1/2,-1/2$ & $0,0$ \\ 21 & $-3/2,3/2$ & $0,0$ \\ 22 & $-1,1$ & $-1,1$ \end{game} We can convert our behaviour profile to a corresponding mixed strategy profile. This is indexable as a nested list with elements [player][strategy]. In [29]: msp = result[0].as_strategy() msp Out[29]: $\left[\left[\frac{1}{3},\frac{2}{3},0,0\right],\left[\frac{2}{3},\frac{1}{3}\right]\right]$ Of course, Alice will receive the same expected payoff from this mixed strategy profile as she would in the original behaviour profile. 
In [30]: msp.payoff(g.players["Alice"]) Out[30]: $\frac{1}{3}$ We can also ask what the expected payoffs to each of the strategies are. Alice's last two strategies correspond to folding when she has the Ace, which is dominated. In [31]: msp.strategy_values(g.players["Alice"]) Out[31]: [Fraction(1, 3), Fraction(1, 3), Fraction(-1, 1), Fraction(-1, 1)] Automating/scripting analysis¶ The real gain in having libraries for doing computation in game theory is to be able to script computations. For example, we can explore how the solution to the game changes as we change the probability that Alice is dealt the Ace. Payoffs and probabilities are represented in games in Gambit as exact-precision numbers, which can be either rational numbers or (exact-precision) decimals. These are called gambit.Rational and gambit.Decimal, and are compatible with the Python fractions.Fraction and decimal.Decimal classes, respectively. (In Gambit 16.0.0, they are derived from them.) Caveat/Tip: This means one cannot set a payoff or probability to be a floating-point number. We justify this based on the principle "explicit is better than implicit." In two-player games, the extreme points of the set of Nash equilibria are rational numbers whenever the data of the game are rational, and the Gambit equilibrium computation methods take advantage of this. If the payoff of a game were specified as a floating-point number, e.g. 0.333333 instead of 1/3, surprising results could occur due to rounding.
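Since gambit.Rational is compatible with Python's fractions.Fraction, the rounding hazard is easy to demonstrate with the standard library alone:

```python
from fractions import Fraction

exact = Fraction(1, 3)
approx = Fraction("0.333333")   # what a decimal-typed payoff really stores

print(3 * exact)                # 1
print(3 * approx)               # 999999/1000000 -- not 1
print(1 - 3 * approx)           # 1/1000000
```

With exact rationals, quantities that should be equal in equilibrium (like the two indifference payoffs above) compare equal exactly, with no tolerance parameter.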
In [32]: import pandas probs = [ gambit.Rational(i, 20) for i in xrange(1, 20) ] results = [ ] for prob in probs: g.players.chance.infosets[0].actions[0].prob = prob g.players.chance.infosets[0].actions[1].prob = 1-prob result = gambit.nash.lcp_solve(g)[0] results.append({ "prob": prob, "alice_payoff": result.payoff(g.players["Alice"]), "bluff": result[g.players["Alice"].infosets[1].actions[0]], "belief": result.belief(g.players["Bob"].infosets[0].members[1]) }) df = pandas.DataFrame(results) df Out[32]: alice_payoff belief bluff prob 0 -13/15 1/4 1/57 1/20 1 -11/15 1/4 1/27 1/10 2 -3/5 1/4 1/17 3/20 3 -7/15 1/4 1/12 1/5 4 -1/3 1/4 1/9 1/4 5 -1/5 1/4 1/7 3/10 6 -1/15 1/4 7/39 7/20 7 1/15 1/4 2/9 2/5 8 1/5 1/4 3/11 9/20 9 1/3 1/4 1/3 1/2 10 7/15 1/4 11/27 11/20 11 3/5 1/4 1/2 3/5 12 11/15 1/4 13/21 13/20 13 13/15 1/4 7/9 7/10 14 1 1/4 1 3/4 15 1 1/5 1 4/5 16 1 3/20 1 17/20 17 1 1/10 1 9/10 18 1 1/20 1 19/20 In [33]: import pylab %matplotlib inline pylab.plot(df.prob, df.bluff, '-') pylab.xlabel("Probability Alice gets ace") pylab.ylabel("Probability Alice bluffs with king") pylab.show() In [34]: pylab.plot(df.prob, df.alice_payoff, '-') pylab.xlabel("Probability Alice gets ace") pylab.ylabel("Alice's equilibrium payoff") pylab.show() In [35]: pylab.plot(df.prob, df.belief, '-') pylab.xlabel("Probability Alice gets ace") pylab.ylabel("Bob's equilibrium belief") pylab.ylim(0,1) pylab.show() As a final experiment, we can also change the payoff structure instead of the probability of the high card. How would the equilibrium change if a Raise/Meet required putting 2 into the pot instead of 1? In [36]: deal.actions[0].prob = gambit.Rational(1,2) In [37]: deal.actions[1].prob = gambit.Rational(1,2) The outcomes member of the game lists all of the outcomes. An outcome can appear at multiple nodes. Outcomes, like all other objects, can be given text labels for easy reference. 
In [38]: g.outcomes["Alice wins big"] Out[38]: <Outcome [0] 'Alice wins big' in game 'A simple poker example'> In [39]: g.outcomes["Alice wins big"][0] = 3 In [40]: g.outcomes["Alice wins big"][1] = -3 In [41]: g.outcomes["Bob wins big"][0] = -3 In [42]: g.outcomes["Bob wins big"][1] = 3 Once again, solve the revised game using Lemke's algorithm on the sequence form. In [43]: result = gambit.nash.lcp_solve(g) In [44]: len(result) Out[44]: 1 In [45]: result[0] Out[45]: $\left[\left[\left[1,0\right],\left[\frac{1}{2},\frac{1}{2}\right]\right],\left[\left[\frac{1}{2},\frac{1}{2}\right]\right]\right]$ The value of the game to Alice is now higher: 1/2 instead of 1/3 with the original payoffs. In [46]: result[0].payoff(g.players["Alice"]) Out[46]: $\frac{1}{2}$ Bob's equilibrium belief about Alice's hand is also different of course, as he now is indifferent between meeting and passing Alice's raise when he thinks the chance she has the Ace is 2/3 (instead of 3/4 before). In [47]: result[0].belief(g.players["Bob"].infosets[0].members[0]) Out[47]: $\frac{2}{3}$ Serialising the game in other formats¶ We already saw above some of the formats that can be used to serialise games. There are a few other standard options. For example, Gambit also has a format for games in strategic (or normal) form. 
You can get the reduced normal form of the extensive game in this format directly: In [48]: print g.write('nfg') NFG 1 R "A simple poker example" { "Alice" "Bob" } { { "11" "12" "21" "22" } { "1" "2" } } "" 0 0 1 -1 -2 2 -1 1 1 -1 0 0 0 0 -1 1 Also, we can write the game out in the XML format used by Game Theory Explorer: In [49]: print g.write('gte') <gte version="0.1"> <gameDescription/> <display> <color player="1">#FF0000</color> <color player="2">#0000FF</color> <font>Times</font> <strokeWidth>1</strokeWidth> <nodeDiameter>7</nodeDiameter> <isetDiameter>25</isetDiameter> <levelDistance>75</levelDistance> </display> <players> <player playerId="1">Alice</player> <player playerId="2">Bob</player> </players> <extensiveForm> <node> <node player="Alice" prob="1/2" move="A"> <node iset="b" player="Bob" move="R"> <outcome move="M"> <payoff player="Alice">3</payoff> <payoff player="Bob">-3</payoff> </outcome> <outcome move="P"> <payoff player="Alice">1</payoff> <payoff player="Bob">-1</payoff> </outcome> </node> <outcome move="F"> <payoff player="Alice">-1</payoff> <payoff player="Bob">1</payoff> </outcome> </node> <node player="Alice" prob="1/2" move="K"> <node iset="b" player="Bob" move="R"> <outcome move="M"> <payoff player="Alice">-3</payoff> <payoff player="Bob">3</payoff> </outcome> <outcome move="P"> <payoff player="Alice">1</payoff> <payoff player="Bob">-1</payoff> </outcome> </node> <outcome move="F"> <payoff player="Alice">-1</payoff> <payoff player="Bob">1</payoff> </outcome> </node> </node> </extensiveForm> </gte>
# How do you sketch the angle (11pi)/6 in standard position? The angle is already in standard position when angles are measured in the interval $\left[0 , 2 \pi\right]$. If instead the interval $\left[- \pi , \pi\right]$ is used, then $\frac{11 \pi}{6} = - \frac{\pi}{6}$, so $\textcolor{blue}{- \frac{\pi}{6}}$ and $\textcolor{red}{\frac{11 \pi}{6}}$ label the same terminal side.
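The conversion between the two conventions is just a wrap into $[-\pi, \pi)$; as an illustrative sketch (function name is mine):

```python
from math import pi

def wrap_to_pi(theta):
    """Map an angle to the equivalent angle in [-pi, pi)."""
    return (theta + pi) % (2 * pi) - pi

print(wrap_to_pi(11 * pi / 6))   # -pi/6, about -0.5236
```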
Question (a) Show that a 30,000-line-per-centimeter grating will not produce a maximum for visible light. (b) What is the longest wavelength for which it does produce a first-order maximum? (c) What is the greatest number of lines per centimeter a diffraction grating can have and produce a complete second-order spectrum for visible light? 1. undefined 2. $333 \textrm{ nm}$ 3. $7142 \textrm{ lines/cm}$
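All three answers follow from the grating equation $d\sin\theta = m\lambda$ with $\sin\theta \le 1$; a sketch of the arithmetic (variable names are mine):

```python
# Grating equation: d * sin(theta) = m * lambda, with sin(theta) <= 1.
d_nm = 1e7 / 30000                 # slit spacing in nm (1 cm = 1e7 nm)

# (a) d is about 333 nm, below the ~380 nm edge of the visible range,
#     so even at theta = 90 degrees no visible maximum can form.
# (b) Longest first-order wavelength: lambda_max = d * sin(90 deg) = d.
longest_first_order_nm = d_nm
# (c) A complete second-order visible spectrum needs 2 * 700 nm <= d.
max_lines_per_cm = 1e7 / (2 * 700)

print(round(longest_first_order_nm), int(max_lines_per_cm))  # 333 7142
```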
# How to sort the "do" output? So I have a function (FindCoefficients) that iterates through some values to find the maximum value for a function. It is much quicker to hold the non-linear values constant and do a find-maximum for the linear coefficients than to do a non-linear optimization for a complicated function. This is the main line that I'm struggling with: Do[Print[a , b, FindCoefficients[a, b]], {a, 1, 10, 1}, {b, 1, 10, 1}] This gives me the output of 11{value of maximum, other info} 12{value of maximum, other info} 13{value of maximum, other info} ... 21{value of maximum, other info} ... 1010{value of maximum, other info} Is there any way to sort my output from greatest to smallest, so I can easily pick out the maximum? Collect the results in a Table instead of printing them, flatten the 10×10 table into a list of triples, and sort by the maximum value, largest first: SortBy[Flatten[Table[{a, b, FindCoefficients[a, b]}, {a, 1, 10, 1}, {b, 1, 10, 1}], 1], -First[#[[3]]] &] (* largest maximum first *)
# Chapter 1 Atomic Structure¶ ## Example 1_1 pgno:6¶ In [12]: print'The Atomic weight of 12.0111 for Natural Carbon shows that the 12C nuclide must be present to a larger extent.' print'\nLet 100 atoms of natural carbon contain x atoms of 12C nuclide.\n' X=(13.0034-12.0111)*100/(13.0034-12.0000);#percentage of 12C in natural carbon# print'Percentage of 12C in Natural carbon=X=',round(X,3) Y=100-X;#percentage of 13C in natural carbon# print'\nPercentage of 13C in Natural carbon=',round(Y,3) The Atomic weight of 12.0111 for Natural Carbon shows that the 12C nuclide must be present to a larger extent. Let 100 atoms of natural carbon contain x atoms of 12C nuclide. Percentage of 12C in Natural carbon=X= 98.894 Percentage of 13C in Natural carbon= 1.106 ## Example 1_2 pgno:9¶ In [13]: L=3*10**-7;#wavelength in metres (value inferred from the printed output)# C=3*10**2;#velocity of light in megametre/sec# v=C*10**-6/L;#frequency of radiation in Teracycles per sec# v1=1/L;#wave number of radiation in per meter# print'Frequency of radiation=v=Teracycles per sec=1.0*10**15Hz',v Frequency of radiation=v=Teracycles per sec=1.0*10**15Hz 1000.0 ## Example 1_3 pgno:12¶ In [14]: V=0.85;#external voltage in volts# e=1.6*10**-19;#electron charge in coulombs# m=9.1*10**-28;#electron mass in grams# v=(2*V*e*10/m)**0.5;#velocity of electron in motion in Kilocm per sec# print'velocity of electron in motion=v= Kilocm per sec=5.47*10**7cm per sec',round(v,3) W=(3.198*10**-12)/(1.6*10**-12);#Threshold energy in eV# print'\nThreshold energy of electron=W=eV',round(W,3) v0=(3.198*10**-12)/(6.625*10**-15);#Threshold frequency in tera per sec# print'\nThreshold frequency=v0=Tera per sec=4.83*10**14per sec',round(v0,3) velocity of electron in motion=v= Kilocm per sec=5.47*10**7cm per sec 54671.848 Threshold energy of electron=W=eV 1.999 Threshold frequency=v0=Tera per sec=4.83*10**14per sec 482.717 ## Example 1_4 pgno:13¶ In [15]: L=6.023*10**23;#Avogadro's number (symbol reused; value inferred from the printed output)# E=118.5*10**3*4.2*10**7;#ionization energy in ergs per mole# C=3*10**10;#velocity of light in cm/sec# h=6.625*10**-27;#Planck's constant# l=(L*h*C*10**8)/E; print'wavelength required to cause 
ionization=l=Angstrums',round(l,3) wavelength required to cause ionization=l=Angstrums 2405.206 ## Example 1_5 pgno:17¶ In [16]: n1=2.; n2=4.; dE=21.7*(10**-12)*((1/n1**2)-(1/n2**2)); h=6.625*10**-27;#Planck's constant# C=3*10**10;#velocity of light in cm/sec# l=h*C*10**8/dE;#Wavelength of second line in balmer series in Angstrums# print'wavelength of the second line in balmer series=l=Angstrums',round(l,3)#the textbook's printed answer is slightly off; the value from execution is the correct one# wavelength of the second line in balmer series=l=Angstrums 4884.793 ## Example 1_6 pgno:18¶ In [17]: n1=1; dE=21.7*(10**-12)/(1.6*10**-12*n1**2);#energy required to promote an electron from ground to infinity in eV# print'Ionisation potential for an electron=dE=eV',round(dE,3) Ionisation potential for an electron=dE=eV 13.563 ## Example 1_7 pgno:20¶ In [18]: h=6.625*10**-27;#Planck's constant# V=2*10**3;#velocity of Cricket Ball in cm/sec# m=170;#weight of Cricket Ball in grams# l=h/(m*V);#DeBroglie Wavelength of CricketBall in Angstrums# print'DeBroglie Wavelength of CricketBall=l==1.95*10**-24Angstrums',l DeBroglie Wavelength of CricketBall=l==1.95*10**-24Angstrums 1.94852941176e-32 ## Example 1_8 pgno:21¶ In [19]: from math import pi,sqrt r1=0.53*10**-8;#Bohr radius of first orbit in cm (value inferred from the printed output)# r2=4*r1;#Bohr radius in second state in cm# print'Bohr radius in second state=r2=2.12*10**-8cm' h=6.625*10**-27;#Planck's constant# m=9.11*10**-28;#electron mass in grams# v2=h/(pi*m*r2);#electron velocity in second state in cm per sec# print'\nElectron velocity in second state=v2=cm per sec',v2 l=(h*10**8)/(m*v2);#De Broglie wavelength of electron in second state in Angstrums# print'\nDe Broglie wavelength of electron in second state=l=Angstrums',l e=1.6*10**-12;#electron charge in ergs# v=sqrt((2*(10**4)*e)/m);#velocity of the moving electron in second state in cm/sec# print'\nVelocity of moving electron in second state=v=cm per sec',v l1=(h*10**8)/(v*m);#De Broglie wavelength of moving electron in Angstrums# print'\nDe Broglie wavelength of moving electron in 
secondstate=l1=Angstrums',round(l1,4) Bohr radius in second state=r2=2.12*10**-8cm Electron velocity in second state=v2=cm per sec 109189724.953 De Broglie wavelength of electron in second state=l=Angstrums 6.66017642561 Velocity of moving electron in second state=v=cm per sec 5926738977.44 De Broglie wavelength of moving electron in secondstate=l1=Angstrums 0.1227 ## Example 1_9 pgno:21¶ In [20]: from math import pi m=9.11*10**-28;#electron mass in grams# v=1.1*10**8;#velocity of electron in cm per sec# p=m*v;#momentum of electron in gram cm per sec# print'momentum of electron=p=10.01*10**-20gram cm per sec' dp=p*10**-2;#Uncertainity in momentum in gram cm per sec# print'\nUncertainity in momentum=10.01*10**-22gram cm per sec' h=6.625*10**-27;#plank's constant# dx=(h*10**8)/(4*pi*dp);#Uncertainity in position in Angstrum# print'\nUncertainity in position=dx=Angstrum',round(dx,2) momentum of electron=p=10.01*10**-20gram cm per sec Uncertainity in momentum=10.01*10**-22gram cm per sec Uncertainity in position=dx=Angstrum 52.61 ## Example 1_10 pgno:24¶ In [21]: h=6.625*10**-27;#plank's constant# g=10**3;#particle mass in grams# l1=1.;#length of one dimensional box in cm# n1=1.; n2=2.; dE1=((n2**2-n1**2)*h**2)/(8*g*l1**2);#Energy difference between two energy levels of particle in eV# print'Energy difference between two energy levels of particle=dE1=1*10**-44eV' l2=2*10**-8;#length of one dimensional box in cm# m=9.11*10**-28;#electron mass in grams# dE2=((n2**2-n1**2)*h**2)/(8*m*l2**2*1.6*10**-11);#Energy difference between two energy levels of electron in eV# print'\nEnergy difference between two energy levels of electron=dE2=eV',round(dE2,3) Energy difference between two energy levels of particle=dE1=1*10**-44eV Energy difference between two energy levels of electron=dE2=eV 2.823
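As a quick cross-check of Example 1_9 in Python 3 (the notebook itself is Python 2), the uncertainty-principle arithmetic gives the same position uncertainty:

```python
from math import pi

h = 6.625e-27          # Planck's constant, erg s
m = 9.11e-28           # electron mass, g
v = 1.1e8              # electron speed, cm/s
dp = m * v * 1e-2      # 1% uncertainty in momentum, g cm/s
# Heisenberg: dx >= h / (4 pi dp); the 1e8 factor converts cm to Angstrom.
dx_angstrom = h * 1e8 / (4 * pi * dp)
print(round(dx_angstrom, 2))  # 52.61
```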
# Quick Basix: Random Animated Blinking Difficulty: Beginner Length: Short Even when you're animating on the timeline a touch of ActionScript can enhance what you're doing. In this Quick Tip we're going to use a single line of AS3 to add animated realism in the blink of an eye! ## Step 1: Open Your Eyes Grab the source files and open "basis.fla". On the stage you'll find the "head" movieclip, which comprises two layers containing the movieclips "face" and "eyes". Of course, if you want to start from scratch, using your own graphics, you're welcome to do so. ## Step 2: The Eyes Have it We're going to make the eyes blink periodically, so begin by double-clicking the "eyes" movieclip to enter its timeline. ## Step 3: Blinkered View Lengthen the timeline by adding a keyframe at frame 80 on the "eyes" layer. This is where we're going to place the eyes in their "blinked" state. Delete the open eyes from the stage and turn your attention to the library. In there, you'll find "eyesClosed", which you can position on stage where the "eyesOpen" movieclip previously was. With your eyesClosed selected, hit F5 to add a few more frames. Add as many as you want; doing so will increase the time your character's eyes spend closed during any given blink. 3 frames is fine in our case. Test your movie (Command/Ctrl + Enter) to get an idea of the blinking effect you've created. The playhead moves along the timeline causing periodical blinking. Perfect! Right? Well not exactly. The uniform blinking would suggest that our character is either a robot or missing his frontal lobe. ## Step 4: Randomeyes Let's improve the effect by randomising the blinking. Add a second layer to the "eyes" movieclip, label it "actions" and lock it. Select the first frame and enter the following snippet in the actions panel (Window > Actions): ## Step 5: Eye Examination What does this snippet actually do?
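(The snippet itself did not survive in this copy of the tutorial; reconstructed from the explanation in Step 5, it is almost certainly this single line.)

```actionscript
// Send the playhead to a random frame between 1 and totalFrames.
gotoAndPlay(uint(Math.random() * totalFrames) + 1);
```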
Well, the gotoAndPlay(); action sends the playhead along the current timeline, to whichever frame number we define within the braces. The contents within our braces will give us our frame number. The Math.random() method will return a number between 0 and (though not including) 1. This is multiplied by totalFrames, a property of our movieclip - the amount of frames within it (in our case 83). uint() neatens up the outcome of our random number * totalFrames, by rounding down and giving us an integer. The lowest integer we can expect is 0, since uint(0*83) is 0. The highest integer we can hope for is 82, since uint(0.9999999999*83) is 82. Therefore, we +1 to finish things off, giving us a destination frame somewhere between 1 and 83. Once the playhead reaches the end of our timeline, it returns to frame 1 and is once again sent to a random frame. Check the movie again! Our character is still blinking, but now at irregular intervals, which looks far less lobotomised. The effect becomes even clearer with two instances of our character on stage. In the example below, we have two different characters, but both make use of exactly the same "eyes" movieclip: I'm not saying these two look like they've got it totally together, but you get the idea.. ## Improved Eyesight This is a really simple end result, why not see if you can improve it? • Have a play with the timing; alter the framerate and number of frames within your blinking movie. • Why not try and alter the snippet to prevent the playhead from jumping to within the blinking action itself? • Perhaps you could even prevent the animation from blinking too rapidly in succession? ## Conclusion You've finished! This is a straight-forward and commonly used technique, but moving the playhead to random frames can be applied in thousands of situations. I hope you find use for it :)
# Estimating the rental rate of capital from data Take the classic optimization problem of the neo-classical firm: $$\begin{array}{*2{>{\displaystyle}r}} \mbox{maximize (over K, L)} & f(K, L) - RK - WL \end{array}$$ The first order condition equates the marginal product of capital with the rental rate of capital $R$. Which raises the question... ### How do macroeconomists typically estimate a time series for the rental rate of capital using U.S. data? (Disclaimer: I don't do macro, but my curiosity has been sparked.) One approach is to use financial market data to get $R_t$, use another first order condition that the rental rate of capital equals the nominal interest rate plus depreciation (i.e. Hall and Jorgensen). Backing a rental rate out of financial market data is not obvious though! In financial markets, prices vary based upon risk and time. • time dimension: Long-term rates are typically higher than short-term rates. When macro-economists and macro-models talk about the rental rate of capital, what's the time frame? • risk dimension: Eg. callable bonds have high yields than non-callable bonds. Debt and equity may have different expected returns based upon risk. • inflation dimension: A fixed nominal rate is a stochastic real rate depending on realized inflation, and the expected real rate is the nominal rate minus inflation expectations. The real rental rate would add back inflation expectations. A completely different direction is taken by Casey Mulligan where he sticks entirely with Bureau of Economic Analysis (BEA) data. I don't follow this literature, and I don't have a sense of the range of approaches that are considered sensible in modern, empirical macro. • Really late but why not have $MPK=r$? – EconJohn Feb 8 '18 at 21:41 • @EconJohn Could you expand what you mean? 
– Matthew Gunn Feb 8 '18 at 22:05 • Well, considering classic marginal productivity theory of wages which states the relationship of $\text{MPL}=\text{w}$, a simple extention can be drawn to capital rental rate where $\text{MPK}=\text{r}$. – EconJohn Feb 9 '18 at 3:51 • @EconJohn So then to measure marginal productivity of capital, you have to go through an exercise like this? – Matthew Gunn Feb 9 '18 at 4:26 • I'd say so. This paper is excellent, however they don't relate MPK to the price of capital- I'm suggesting (in a lack of an actual rental data) that solving where $\text{MPK=r}$ is not a bad idea. the formula of having $MPK=\alpha \frac{Y}{K}$ seems appealing. – EconJohn Feb 9 '18 at 17:55 Time series on rental price of capital can be estimated using $$r=\frac{P_k}{P}(i-inf+\delta)$$ here, $$P_k$$ is the price of capital goods (price index for capital goods), $$P$$ is a deflator, $$i$$ is nominal interest rate, $$inf$$ is an inflation rate and $$\delta$$ is depreciation rate of physical capital stock.
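To make the bookkeeping in that formula concrete, here is a minimal sketch (function name and all numbers are mine and purely illustrative, not estimates):

```python
def rental_rate(p_k, p, i, inf, delta):
    """User cost of capital: r = (P_k / P) * (i - inf + delta)."""
    return (p_k / p) * (i - inf + delta)

# Illustrative values: relative price of capital 1.1, 5% nominal rate,
# 2% expected inflation, 8% depreciation -> r = 1.1 * 0.11 = 0.121
r = rental_rate(1.1, 1.0, 0.05, 0.02, 0.08)
print(round(r, 3))  # 0.121
```

In practice each input would itself be a time series (a capital-goods price index, a deflator, a market interest rate, expected inflation, and a BEA-style depreciation rate), which is where the measurement questions raised in the post come in.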
Band edge filter for raised cosine impulse I'm hoping to use an implementation of a frequency-locked loop for rough frequency synchronization in a PSK31 demodulator. The approach is to define a filter that is the derivative of the matched filter in the frequency domain. A couple slides from fred harris illustrate the idea: I've found implementations for the typical case of a root-raised-cosine pulse shaping filter. However, PSK31 is a bit "special" in that it uses a raised cosine impulse. AKA, the Hann function: $$h(t) = {1 \over 2}\, (1+\cos(\pi t))\, \Pi(t/2)$$ where $\Pi$ is the rectangle function. How can I calculate the appropriate band-edge filter in this case? • The derivative of the Fourier transform of $h(t)$ should result in something like $i\omega h(t)$. Look up the derivative theorems of the Fourier transform you're using. Also, I don't understand the $t/t$ argument of the rectangle function, as how it differs from 1. – Andy Walls Jul 6 '17 at 1:40 • @AndyWalls Sorry, the t/t thing was a typo. Fixed. – Phil Frost Jul 6 '17 at 1:45 • Oops that should probably be $-ith(t)$. – Andy Walls Jul 6 '17 at 1:49 It looks like the filter you want is indeed $-ith(t)$. Here is some Octave code to get a visualization in the frequency domain: t = [-1:0.01:1]; h = 0.5*(1+cos(pi*t)); hd = -i*t*0.5.*(1+cos(pi*t)); H=fftshift(fft(h,512)); HD=fftshift(fft(hd,512)); v = [-256:255]; plot(v, 20*log10(abs(H/512)), v, 20*log10(abs(HD/512))) BTW, when plotting $h(t)$ in the time domain, your provided $h(t)$ appears to be normalized, spanning from $t = -1$ to $t = 1$. So, I'm guessing your normalized symbol period is $T = 1$ with some ISI, or $T = 2$ with no ISI. • Wow, that's profoundly simple! I'm doing some work to validate, then I'll accept. Regarding ISI, it's the case with PSK31 that the transmitted waveform has zero ISI, but then of course after matched filtering in the receiver there is again ISI.
I'm not sure why the designer of this modem didn't just use an RRC filter. So I think with $T=1$, the $h(t)$ from the question should be correct, because the magnitude is zero at the previous and next symbol: $h(-1) = h(1) = 0$, right? – Phil Frost Jul 6 '17 at 14:42 • So, yeah, the pulse filter meets the Nyquist criterion for being 0 at the optimal sampling point of the next symbol and previous symbol. So no ISI until the receiver filters start introducing it. – Andy Walls Jul 6 '17 at 14:51 • BTW, the derivative property was taken from #107 in the table here: en.wikipedia.org/wiki/… . You may need to consider #104 in conjunction with #107 when you scale the time axis to actual symbol period durations. The devil is in the details. :) – Andy Walls Jul 6 '17 at 14:54 • Regarding the simplicity: yeah, the Fourier (and Laplace) transform turns calculus in one domain into algebra in the other. That's why we use the Laplace transform to solve basic passive circuit problems in EE10X classes. $V = I sL$ is easier to handle than $v= L \dfrac {di}{dt}$. :) – Andy Walls Jul 6 '17 at 22:28
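For reference, here is a loose Python translation of the filter construction from the Octave snippet (not from the thread itself; the grid length and normalization are arbitrary choices for illustration). It builds the matched filter h(t) and the band-edge filter -j·t·h(t) as tap lists:

```python
import math

# Raised-cosine (Hann) matched filter h(t) sampled on t in [-1, 1],
# and the band-edge filter hd(t) = -j * t * h(t) discussed above.
N = 201  # arbitrary odd tap count so there is an exact center tap
t = [-1.0 + 2.0 * n / (N - 1) for n in range(N)]
h = [0.5 * (1.0 + math.cos(math.pi * tk)) for tk in t]
hd = [complex(0.0, -tk * hk) for tk, hk in zip(t, h)]
```

The taps come out purely imaginary and odd-symmetric about the center tap, which is what you would expect of a differentiator-like band-edge filter.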
# Indivisible

##### Stage: 3 Short Challenge Level:

Some students (fewer than $100$) are having trouble lining up for a school production. When they line up in $3$s, two people are left over. When they line up in $4$s, three people are left over. When they line up in $5$s, four people are left over. When they line up in $6$s, five people are left over. How many students are there in the group? If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas. This problem is taken from the UKMT Mathematical Challenges.
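A brute-force check of the four conditions, written in Python purely as an illustration, confirms that they pin down a single group size below 100 (each condition says the group is one person short of a full line-up, so the answer is one less than a common multiple of 3, 4, 5 and 6):

```python
# Find all group sizes n < 100 satisfying the four remainder conditions.
solutions = [
    n for n in range(1, 100)
    if n % 3 == 2 and n % 4 == 3 and n % 5 == 4 and n % 6 == 5
]
```

Equivalently, n + 1 must be divisible by the least common multiple of 3, 4, 5 and 6, which is 60.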
On $C(K)$ spaces embeddable into the Banach space $c_0$ Problem 1. Characterize compact Hausdorff spaces $$K$$ for which the Banach space $$C(K)$$ of continuous real-valued functions embeds into the Banach space $$c_0$$. Since $$c_0$$ has separable dual, such $$K$$ must be countable. So, we can make Problem 1 more precise: Problem 2. Is it true that for every compact countable space $$K$$ the Banach space $$C(K)$$ is isomorphic to a subspace of $$c_0$$? Another possible option: Problem 3. Let $$K$$ be a compact Hausdorff space. Is it true that the Banach space $$C(K)$$ is isomorphic to $$c_0$$ if $$C(K)$$ is isomorphic to a subspace of $$c_0$$? • Problem 3: $K$ finite makes trivial counterexamples. Surprisingly (?) the answer is yes for $K$ infinite according to Tomek Kania's answer. – YCor Jun 3 '19 at 6:32 1 Answer The Szlenk index is the answer. A space $$C(K)$$, where $$K$$ is an infinite compact Hausdorff space, is embeddable into $$c_0$$ if and only if $$K$$ is homeomorphic to an ordinal below $$\omega^\omega$$, and if this is the case (and $$K$$ is infinite) the space itself is isomorphic to $$c_0$$. So the answer to problem 2 is no; however, the answer to problem 3 is yes. For details see Rosenthal's chapter in the Handbook of Banach spaces. • Thank you very much for the answer. This is exactly what I need! – Taras Banakh Jun 3 '19 at 5:27 • Rosenthal, Haskell P. The Banach spaces C(K). Handbook of the geometry of Banach spaces, Vol. 2, 1547-1602, North-Holland, Amsterdam, 2003. – YCor Jun 3 '19 at 6:23
# Math Help - Determining the derivative of a function using the chain rule 1. ## Determining the derivative of a function using the chain rule How do I get from the problem to the answer using the chain rule? It would be much appreciated if someone indicated the steps for me, thank you! 2. Originally Posted by !!! How do I get from the problem to the answer using the chain rule? It would be much appreciated if someone indicated the steps for me, thank you! To differentiate the product you use the product rule. As part of that process you need to differentiate $(x^2 + 2)^{1/3}$. Get that derivative using the chain rule. 3. Only one comment to add: you recently posted the same problem on MathLinks. They don't actually reply as fast as we do, so I suggest you stay here because you'll get a quick answer and full solutions. (This last depends on your problem.) 4. Alright, thanks for the help and heads-up, guys. I got through the part where I already performed the product rule and chain rule, but so far, my answer looks like this: How do I get from that to the correct answer of ? 5. Originally Posted by !!! Alright, thanks for the help and heads-up, guys. I got through the part where I already performed the product rule and chain rule, but so far, my answer looks like this: How do I get from that to the correct answer of ? Note that $(x^2 + 2)^{-2/3} = (x^2 + 2)^{1/3} (x^2 + 2)^{-1} = \frac{(x^2 + 2)^{1/3}}{(x^2 + 2)}$. Make that substitution and then factorise by taking out the common factor of $2x (x^2 + 2)^{1/3}$ in each term. 6. Originally Posted by !!! Alright, thanks for the help and heads-up, guys. I got through the part where I already performed the product rule and chain rule, but so far, my answer looks like this: How do I get from that to the correct answer of ?
$y'=2x(x^2+2)^\frac{1}{3}+(x^2+1)\cdot\frac{1}{3}(x^2+2)^\frac{-2}{3}(2x)$ rewriting the second term with a positive exponent gives $y'=2x(x^2+2)^\frac{1}{3}+(2x)(x^2+1)\cdot\frac{1}{3(x^2+2)^\frac{2}{3}}$ multiplying the numerator and denominator of the second term by $(x^2+2)^\frac{1}{3}$ and simplifying gives $y'=2x(x^2+2)^\frac{1}{3}+(2x)(x^2+1)\frac{(x^2+2)^\frac{1}{3}}{3(x^2+2)}$ factoring out the GCF gives $y'=2x(x^2+2)^\frac{1}{3}\left[1+\frac{x^2+1}{3(x^2+2)}\right]$ 7. Thank you very much, everyone. Now I understand how to do that problem. I also have trouble determining the derivatives of these and I'm not sure if I'm doing them correctly because the book doesn't have their answers. $1. y = x \sin x^{1/2}$ $y' = x \cos (x^{1/2}) d/dx (x^{1/2}) + (\sin x^{1/2}) d/dx (x^{1/2})$ $y' = x \cos \sqrt{x} + \sin \sqrt{x} / {2x \sqrt{x}}$ 2. $y = x/(7-3x^{1/2})$ I only got up to $(7-3x)^{1/2} (-3) - x[(1/2)(7-3x^{1/2})] (-3)$ 8. Originally Posted by !!! Thank you very much, everyone. Now I understand how to do that problem. I also have trouble determining the derivatives of these and I'm not sure if I'm doing them correctly because the book doesn't have their answers. $1. y = x \sin x^{1/2}$ $y' = x \cos (x^{1/2}) d/dx (x^{1/2}) + (\sin x^{1/2}) d/dx (x^{1/2})$ $y' = x \cos \sqrt{x} + \sin \sqrt{x} / {2x \sqrt{x}}$ 2. $y = x/(7-3x^{1/2})$ I only got up to $(7-3x)^{1/2} (-3) - x[(1/2)(7-3x^{1/2})] (-3)$ 1. You need the product rule: $u = x \Rightarrow \frac{du}{dx} = 1$ $v = \sin x^{1/2} \Rightarrow \frac{dv}{dx} = \cos x^{1/2} \times \frac{1}{2} x^{-1/2} = \cos \sqrt{x} \times \frac{1}{2\sqrt{x}}$, where the chain rule has also been used. Then you substitute the above results into $\frac{dy}{dx} = u \frac{dv}{dx} + v \frac{du}{dx}$. Some simplification of the resulting answer will be possible ..... ---------------------------------------------------------------------------------------------- 2.
You need the quotient rule: $u = x \Rightarrow \frac{du}{dx} = 1$ $v = 7 - 3 x^{1/2} \Rightarrow \frac{dv}{dx} = -\frac{3}{2} x^{-1/2} = -\frac{3}{2\sqrt{x}}$. Then you substitute the above results into $\frac{dy}{dx} = \frac{v \frac{du}{dx} - u \frac{dv}{dx}}{v^2}$. Some simplification (requiring a little bit of algebra) of the resulting answer will be desirable .....
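Looking back at the first problem in the thread, the function being differentiated appears to be $y=(x^2+1)(x^2+2)^{1/3}$ (inferred from the derivative shown, so treat that as an assumption). A quick numerical sketch comparing the factored answer against a central-difference approximation:

```python
import math

def y(x):
    # Inferred original function: y = (x^2 + 1) * (x^2 + 2)^(1/3)
    return (x * x + 1.0) * (x * x + 2.0) ** (1.0 / 3.0)

def dy_factored(x):
    # Factored answer from the thread:
    # 2x(x^2+2)^(1/3) * [1 + (x^2+1) / (3(x^2+2))]
    return 2.0 * x * (x * x + 2.0) ** (1.0 / 3.0) * (
        1.0 + (x * x + 1.0) / (3.0 * (x * x + 2.0))
    )

def dy_numeric(x, h=1e-6):
    # Central-difference approximation of dy/dx
    return (y(x + h) - y(x - h)) / (2.0 * h)
```

The two agree to several decimal places at any sample point, which is a cheap way to catch algebra slips when simplifying derivatives by hand.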
## anonymous 5 years ago The line of symmetry of the parabola whose equation is y = ax^2 - 4x + 3 is x = -2. What is the value of "a"? a) -2 b) -1 c) -1/2 The line of symmetry occurs at the value of x where the parabola is a minimum (or maximum). The x-value at which a parabola is minimal (or maximal) is $x=\frac{-b}{2a}$ for a parabola with equation $y=ax^2+bx+c$. You have the following: $x=-2, b=-4$. Solving for a gives you $a=\frac{-b}{2x}=\frac{-(-4)}{2(-2)}=-1$, which is answer (b).
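The rearrangement is easy to verify numerically (a throwaway check, not part of the original solution):

```python
# Axis of symmetry of y = a*x^2 + b*x + c is x = -b / (2a),
# so a = -b / (2x) when the axis x is known.
b = -4.0
x_sym = -2.0
a = -b / (2.0 * x_sym)

def f(t):
    # The parabola from the question with the solved-for "a".
    return a * t * t + b * t + 3.0
```

Symmetry about x = -2 means f(-2 + d) equals f(-2 - d) for any offset d, which confirms a = -1.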
# 🤔 Image Processing Whether you are into Facebook, Instagram, or Snapchat you are probably familiar with all kinds of image filters and manipulations. It turns out that lots of the filters you can apply to images are pretty fun to code. Hopefully you have read the Nested Iteration and Image Processing section to get yourself ready for this project. In any case you will probably want to open up that section in another tab so you can refer to the image module functions that are provided. You can use the following image using the name "golden_gate.png" or you can use any image you choose by using the full URL to the image. For example: http://reputablejournal.com/images/ComputerHistory/TeleType.png will use an automatically reduced size image of a picture the author took in the computer history museum. ## Basic Filters To start, we'll try some pixel by pixel filters. We have a few for you to try here, but feel free to experiment on your own as well. You really can't go wrong here, so let your imagination run wild. The first thing to try is to create a grayscale version of a color image. Grayscale is not quite black and white, where each pixel would be "all on" or "all off"; rather, a grayscale image is one in which the red, green, and blue components of each pixel are all the same and in the range from 0 to 255. Your first task is to figure out how to turn the RGB values for each pixel, which are likely different, into three values for RGB that are the same. There are several different ways you could do this, and there's not really a clear right or wrong, so just try the first thing that occurs to you and it will probably look pretty good. Now that you have a grayscale image, try turning it into a black and white image by setting a threshold value for your gray_value. That is, if gray_value is less than your threshold make r,g,b all 0. If it's more, make r,g,b all 255.
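One simple choice among several defensible ones is to average the three channels. Here is a rough sketch using a plain list of (r, g, b) tuples in place of the image module's pixel objects (the module's actual API is not shown here):

```python
# A tiny 1x4 "image" as a list of rows of (r, g, b) tuples; this stands
# in for whatever pixel access the image module provides.
image = [[(200, 100, 30), (10, 10, 10), (250, 240, 230), (90, 160, 40)]]

def to_grayscale(img):
    # Replace each pixel's channels with their (integer) average.
    out = []
    for row in img:
        new_row = []
        for (r, g, b) in row:
            gray = (r + g + b) // 3   # one simple choice of gray value
            new_row.append((gray, gray, gray))
        out.append(new_row)
    return out

def to_black_and_white(img, threshold=128):
    # Threshold the grayscale version: below -> 0, at or above -> 255.
    out = []
    for row in to_grayscale(img):
        new_row = []
        for (gray, _, _) in row:
            v = 0 if gray < threshold else 255
            new_row.append((v, v, v))
        out.append(new_row)
    return out

gray = to_grayscale(image)
bw = to_black_and_white(image)
```

A weighted average (more green, less blue) often looks better to the eye, but the plain average is a fine first attempt.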
Here is another pretty standard filter for photos called "sepia tone". It will remind you of the old-west photographer style images. The formula to convert a photo into sepia tone is as follows:

newR = (R × 0.393 + G × 0.769 + B × 0.189)
newG = (R × 0.349 + G × 0.686 + B × 0.168)
newB = (R × 0.272 + G × 0.534 + B × 0.131)

And finally here's an activecode place for you to go wild. Try making everything neon. Take away all of the green, double the blue, whatever you can think of. If you find something cool you can come back to it and try it on some different images. ## Rotating, Scaling and Blending In addition to filters, another really common thing to do with images is to crop, resize, and rotate them. We will start with rotating, move on to resizing, and then we will apply the cropping operation to combine multiple images into one by taking parts of two or more images and adding them into a final image. Note that for this group of exercises we will not change the original image in place. Instead we'll make a new empty image and move the pixels from the original image into the appropriate place in the new image. Let's start by rotating an image by 90 degrees in the clockwise direction. This is an easy one to get wrong, as your initial thought might be to just take a pixel from position x, y and put it at position y, x. In fact this is easy to try, so you should do that first to see why it is not quite correct. To get this one right you might want to work through a small example to understand the correct pattern. That's good problem solving practice and really helps in this case. Now let's make an image larger. We'll begin by enlarging the image by the same amount in both the width and the height. This preserves a property of the image known as its aspect ratio. You should think about this before you start, as how you solve this particular problem will make a huge difference in the complexity of your code.
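For the uniform enlargement, one straightforward tactic is nearest-neighbor copying: each pixel of the new image is looked up at the scaled-down coordinate in the original. Sketched again with nested lists standing in for the image module:

```python
def enlarge(img, factor):
    # Nearest-neighbor enlargement by an integer factor in both dimensions;
    # new pixel (row, col) comes from original (row // factor, col // factor).
    height, width = len(img), len(img[0])
    return [
        [img[row // factor][col // factor] for col in range(width * factor)]
        for row in range(height * factor)
    ]

small = [[(255, 0, 0), (0, 0, 255)]]   # one red and one blue pixel
big = enlarge(small, 2)                # becomes a 2x4 image
```

Iterating over the *new* image and mapping back to the original is the key design choice: it guarantees every output pixel gets a value, with no gaps to patch afterward.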
If you get this one mastered, then think about how you might enlarge the image by different factors in height and width. You can make yourself look tall and thin (with an odd shaped head). This is optional, so if you decide to do it you can write it as an enhancement to the code in the activecode window above. If you tried to enlarge an image really big you would notice that it starts to look like an 80's vintage video game. That is, the image will get really blocky. Later on in this project we'll introduce the idea of smoothing an image, which can soften this blocky effect. Once you have conquered enlarging an image it's time to take on reducing an image. The key to this is to start simple. Don't try to invent the perfect solution to this problem before you solve a simple version. What I mean by that is that in order to shrink an image, the ideal solution would be to summarize the colors contained in a block of pixels down to one. But one way to do that summary is to simply pick one pixel to be the representative for the whole group. If you get that strategy working then you might think about more advanced statistical techniques such as using the median of the color values or taking an average of all of the color values in a block of pixels. For our final project from this section let's take parts from two different images and glue them into a new image. If your art department has a green screen this is a fun chance to put yourself into a scene of some kind. If not, it's still fun to take parts of two images and blend them together. You can blend two images by averaging their pixel values. Of course if you prefer to have one image be "on top" of another image then you can just replace the pixel values of the bottom image with the top. Challenge: Can you figure out how to rotate your image by an arbitrary angle? Here's a diagram that will give you a pretty big hint, but remember that in the diagram the x and y coordinates grow up and to the right with 0,0 in the lower left.
However, in your image 0,0 is in the upper left corner and x and y grow down and to the right. Also you'll have to be really careful about how you size your resulting image to make sure you have room for your rotated image. ## Image Kernels for Machine Learning This is definitely a more advanced section, but if you are comfortable with all of the exercises up to now, you are going to like this. ### Cleaning up noise Here is a "friend" of mine in a photo taken long ago. It's been in a box in the closet for years, gathering dust, getting crushed by books and generally aging as old photos tend to do. I recently scanned it to add it to my digital collection. But I'm not too happy with the result. noisyman.png Your job is to digitally restore my friend and make him look like new. How are you going to do that? Well, what do we have to work with? If you look at the image, most of the speckles are just one pixel that is out of whack, caused by dust on the picture or a small scratch. Clearly that pixel value is incorrect with respect to the pixels surrounding it. So we need to fix that. Your first inclination would be to find the bad pixels and fix only those, but there is an even easier solution for us. We can simply pretend that all pixels need to be fixed. There are two strategies we can use: 1. Replace every pixel with the average of the 8 pixels around it. 2. Replace every pixel with the median pixel value of the 8 pixels around it. This strategy should work pretty well as the "bad" pixels tend to be close to 0 or 255 whereas the good pixels are more in the middle. To find the neighbors we will use some nested loops where we calculate the range of the loops based on the current pixel location. For example if we are trying to fix the pixel at row 11 and column 23 then we would want to look at all the pixel values between row 10, column 22 and row 12, column 24.
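Taking the easy way out on the edges, a mean-of-neighbors repair pass over a grayscale 2D list might look like the following sketch (plain lists again, rather than the image module):

```python
def denoise_mean(img):
    # Replace each interior pixel with the average of its 8 neighbors,
    # leaving the one-pixel border untouched (the "easy way" described above).
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy so reads come from the original
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            neighbors = [
                img[rr][cc]
                for rr in (r - 1, r, r + 1)
                for cc in (c - 1, c, c + 1)
                if (rr, cc) != (r, c)
            ]
            out[r][c] = sum(neighbors) // len(neighbors)
    return out

# A flat gray patch with one "dust speck" in the middle.
noisy = [[100] * 5 for _ in range(5)]
noisy[2][2] = 255
clean = denoise_mean(noisy)
```

The speck is replaced by the average of its eight good neighbors; swapping `sum // len` for the median of the sorted neighbor list gives the second strategy.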
This process of iterating over the neighbors of a pixel is called a kernel and is widely used in image processing. One word of caution before you dive into this is that there is literally an "edge case" and a "corner case" that you need to worry about, or your program will crash. That is, the pixels around the edge do not have 8 neighbors. We can deal with this the hard way or the easy way. The hard way is to add some conditionals to your program to detect these edges and respond by dealing with a different number of neighbors. The easy way to deal with this is to make the tradeoff that the pixels at the edge of the image are fine as they are, and we can start fixing our image at row 1, column 1 and stop 1 column from the right and 1 row from the bottom. Now there are no special cases to worry about and you probably won't even notice the difference. One super clever strategy is to use the max and min functions to figure out the correct neighbor indexes. You might try to figure this out if you are really a perfectionist. ### Smoothing This exercise is really a remix of the last problem and a return to our image enlargement problem: we can fix the blocky nature of the enlarged image by replacing each pixel with the average of its neighbors. ### Edge detection The Sobel kernel has two parts: one to calculate the gradient from left to right, that is, how the darkness of the image is changing horizontally, and another to measure how the darkness of the image is changing from top to bottom. $\begin{split} G_x = \left[ {\begin{array}{ccc} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \\ \end{array} } \right]\end{split}$ $\begin{split} G_y = \left[ {\begin{array}{ccc} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \\ \end{array} } \right]\end{split}$ You apply each of the kernels to the neighboring pixels by multiplying the neighbors by the value in the small matrix.
Then we combine the x and y gradients using $$G = \sqrt{G_x^2 + G_y^2}$$ This definitely gives you a taste of why image processing requires so much computational power. It's going to take a while for our Python in the browser to work its way over all of the pixels doing all of this computation for each one. It's also why this one is last, as it can be really time consuming and frustrating to debug something.
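Putting the pieces together, a Sobel pass over a grayscale 2D list (skipping the one-pixel border, as before) could be sketched like this:

```python
import math

GX = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]
GY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

def sobel_magnitude(img):
    # Gradient magnitude G = sqrt(Gx^2 + Gy^2) for each interior pixel;
    # border pixels are left at 0, the easy edge-handling tradeoff.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(GX[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(GY[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = math.sqrt(gx * gx + gy * gy)
    return out

# A vertical edge: dark on the left, bright on the right.
edge = [[0, 0, 0, 255, 255, 255] for _ in range(4)]
grad = sobel_magnitude(edge)
```

The magnitude is large only where the brightness changes, which is exactly the edge between the dark and bright halves of the test image.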
Environmental The key to a comprehensive environmental assessment is the subsurface investigation. Ground penetrating radar (GPR) plays an integral part by providing a non-intrusive means of examining the subsurface for environmental hazards such as soil contamination, underground storage tanks and drums. GPR can delineate landfills and pathways for contaminant flow, as well as conduct hydrogeologic investigations such as water table mapping. Environmental Assessment • Site Assessment • Underground Storage Tanks (USTs) and Drums • Water Table Mapping Environmental Utility Locating • Utility Mapping • Drilling Clearance • Landfill Delineation Environmental Assessment Environmental Assessment: Site Assessments Environmental scientists and land developers use GPR and EM to assist in their redevelopment efforts. These proven geophysical methods allow professionals to conduct a non-invasive investigation of the subsurface at a relatively low cost. This data shows a clearly defined boundary of potential toxic chemicals. The red line is a pipe that runs along the left edge of the data set. This data was collected with a 400 MHz antenna and post-processed in RADAN 7. Environmental Assessment: Underground Storage Tanks (USTs) and Drums Civil engineers, environmental consultants and environmental remediation specialists use GPR and EM to locate the position and impact of underground storage tanks. 2D GPR profile shows 4 USTs from a Mobil gas station in Colchester, Vermont. Traditional USTs were constructed of steel; more recently they have been built of fiberglass. This data image is an example of fiberglass USTs; the GPR can denote the top of the UST as well as the 'product' levels it contains. Environmental Assessment: Water Table Mapping Hydrogeologists use GPR to determine the depth to the water table and to identify potential pathways for subsurface flow. This data illustrates a well-defined water table. Elevation data has been corrected using topography data in RADAN 7.
Data was collected with the SIR 4000 and 200 MHz antenna. Environmental Utility Locating Environmental Utility Locating: Utility Mapping Utility locators and engineers can locate the depth and position of metallic and non-metallic pipes in real time using ground penetrating radar technology. GPR can enhance one’s overall understanding of subsurface targets and obstructions. Data illustrates a 3-dimensional view of a survey of existing and abandoned utilities presented in RADAN 3D. Note the broken linear feature that denotes an abandoned utility. Data collected with a 400 MHz antenna. Environmental Utility Locating: Drilling Clearance Ground penetrating radar can detect what lies beneath the surface before drilling and trenching efforts. GPR technology allows users to safely identify subsurface features and utilities, and avoid costly or dangerous hits. Data set shows three utilities at varying depths. Utilities are located just above a clearly defined bedrock horizon. This data was collected with a 400 MHz antenna.
# Introduction Here I show how to produce P-value, S-value, likelihood, and deviance functions with the concurve package using fake data and data from real studies. Simply put, these functions are rich sources of information for scientific inference, and the image below, taken from Xie & Singh (2013),1 displays why. For a more extensive discussion of these concepts, see the following references.1-13 # Simple Models To get started, we could generate some normal data and combine two vectors in a dataframe:

library(concurve)
set.seed(1031)
GroupA <- rnorm(500)
GroupB <- rnorm(500)
RandomData <- data.frame(GroupA, GroupB)

and look at the differences between the two vectors. We'll plug these vectors and the dataframe they're in inside of the curve_mean() function. Here, the default method involves calculating CIs using the Wald method.

intervalsdf <- curve_mean(GroupA, GroupB,
  data = RandomData, method = "default"
)

Each of the functions within concurve will generally produce a list with three items, and the first will usually contain the function of interest.
head(intervalsdf[[1]], 10)
#>    lower.limit upper.limit intrvl.width intrvl.level     cdf pvalue
#> 1   -0.1125581  -0.1125581 0.000000e+00        0e+00 0.50000 1.0000
#> 2   -0.1125658  -0.1125504 1.543412e-05        1e-04 0.50005 0.9999
#> 3   -0.1125736  -0.1125427 3.086824e-05        2e-04 0.50010 0.9998
#> 4   -0.1125813  -0.1125350 4.630236e-05        3e-04 0.50015 0.9997
#> 5   -0.1125890  -0.1125273 6.173649e-05        4e-04 0.50020 0.9996
#> 6   -0.1125967  -0.1125195 7.717061e-05        5e-04 0.50025 0.9995
#> 7   -0.1126044  -0.1125118 9.260473e-05        6e-04 0.50030 0.9994
#> 8   -0.1126122  -0.1125041 1.080389e-04        7e-04 0.50035 0.9993
#> 9   -0.1126199  -0.1124964 1.234730e-04        8e-04 0.50040 0.9992
#> 10  -0.1126276  -0.1124887 1.389071e-04        9e-04 0.50045 0.9991
#>          svalue
#> 1  0.0000000000
#> 2  0.0001442767
#> 3  0.0002885679
#> 4  0.0004328734
#> 5  0.0005771935
#> 6  0.0007215279
#> 7  0.0008658768
#> 8  0.0010102402
#> 9  0.0011546179
#> 10 0.0012990102

We can view the function using the ggcurve() function. The two basic arguments that must be provided are the data argument and the "type" argument. To plot a consonance function, we would write "c". (function1 <- ggcurve(data = intervalsdf[[1]], type = "c", nullvalue = TRUE)) We can see that the consonance "curve" is every interval estimate plotted, and provides the P-values and CIs, along with the median unbiased estimate. It can be defined as such: $C V_{n}(\theta)=1-2\left|H_{n}(\theta)-0.5\right|=2 \min \left\{H_{n}(\theta), 1-H_{n}(\theta)\right\}$ Its information counterpart, the surprisal function, can be constructed by taking the $$-\log_{2}$$ of the P-value.3,14,15 To view the surprisal function, we simply change the type to "s". (function1 <- ggcurve(data = intervalsdf[[1]], type = "s")) We can also view the consonance distribution by changing the type to "cdf", which is a cumulative probability distribution. The point at which the curve reaches 50% is known as the "median unbiased estimate". It is the same estimate that is typically at the peak of the P-value curve from above.
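The same quantities are easy to reproduce outside R. Under a normal approximation for a mean difference (the estimate and standard error below are placeholders, not the values computed by curve_mean() above), the two-sided P-value function and its surprisal transform are:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function (standard library only).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value_function(theta, estimate, se):
    # Two-sided P-value for the hypothesis that the parameter equals theta:
    # p = 2 * min(H(theta), 1 - H(theta)), with H the confidence distribution.
    h = normal_cdf((estimate - theta) / se)
    return 2.0 * min(h, 1.0 - h)

def s_value(p):
    # Surprisal (S-value) in bits: -log2(p).
    return -math.log2(p)

est, se = -0.11, 0.10                      # placeholder estimate and SE
p_at_est = p_value_function(est, est, se)  # the curve peaks at 1 here
```

At p = 0.05 the surprisal is about 4.32 bits, matching the S-value column that curve_table() reports for the 95% interval.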
(function1s <- ggcurve(data = intervalsdf[[2]], type = "cdf", nullvalue = TRUE)) We can also get relevant statistics that show the range of values by using the curve_table() function. There are several formats that can be exported such as .docx, .ppt, and TeX. (x <- curve_table(data = intervalsdf[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -0.132 | -0.093 | 0.039 | 25.0 | 0.625 | 0.750 | 0.415 |
| -0.154 | -0.071 | 0.083 | 50.0 | 0.750 | 0.500 | 1.000 |
| -0.183 | -0.042 | 0.142 | 75.0 | 0.875 | 0.250 | 2.000 |
| -0.192 | -0.034 | 0.158 | 80.0 | 0.900 | 0.200 | 2.322 |
| -0.201 | -0.024 | 0.177 | 85.0 | 0.925 | 0.150 | 2.737 |
| -0.214 | -0.011 | 0.203 | 90.0 | 0.950 | 0.100 | 3.322 |
| -0.233 | 0.008 | 0.242 | 95.0 | 0.975 | 0.050 | 4.322 |
| -0.251 | 0.026 | 0.276 | 97.5 | 0.988 | 0.025 | 5.322 |
| -0.271 | 0.046 | 0.318 | 99.0 | 0.995 | 0.010 | 6.644 |

# Comparing Functions If we wanted to compare two studies to see the amount of "consonance", we could use the curve_compare() function to get a numerical output. First, we generate some more fake data:

GroupA2 <- rnorm(500)
GroupB2 <- rnorm(500)
RandomData2 <- data.frame(GroupA2, GroupB2)
model <- lm(GroupA2 ~ GroupB2, data = RandomData2)
randomframe <- curve_gen(model, "GroupB2")

Once again, we'll plot this data with ggcurve(). We can also indicate whether we want certain interval estimates to be plotted in the function with the "levels" argument. If we wanted to plot the 50%, 75%, and 95% intervals, we'd provide the argument this way: (function2 <- ggcurve(type = "c", randomframe[[1]], levels = c(0.50, 0.75, 0.95), nullvalue = TRUE)) Now that we have two datasets and two functions, we can compare them using the curve_compare() function.
(curve_compare( data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "c", plot = TRUE, measure = "default", nullvalue = TRUE )) #> [1] "AUC = Area Under the Curve" #> [[1]] #> #> #> AUC 1 AUC 2 Shared AUC AUC Overlap (%) Overlap:Non-Overlap AUC Ratio #> ------ ------ ----------- ---------------- ------------------------------ #> 0.098 0.073 0.024 16.309 0.195 #> #> [[2]] This function will provide us with the area that is shared between the curve, along with a ratio of overlap to non-overlap. Another way to compare the functions is to use the cowplot plot_grid() function. cowplot::plot_grid(function1, function2) We can also do this for the surprisal function simply by changing type to “s”. (curve_compare( data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "s", plot = TRUE, measure = "default", nullvalue = FALSE )) #> [1] "AUC = Area Under the Curve" #> [[1]] #> #> #> AUC 1 AUC 2 Shared AUC AUC Overlap (%) Overlap:Non-Overlap AUC Ratio #> ------ ------ ----------- ---------------- ------------------------------ #> 3.947 1.531 1.531 38.801 0.634 #> #> [[2]] It’s clear that the outputs have changed and indicate far more overlap than before. # Constructing Functions From Single Intervals We can also take a set of confidence limits and use them to construct a consonance, surprisal, likelihood or deviance function using the curve_rev() function. This method is computed from the approximate normal distribution. Here, we’ll use two epidemiological studies16,17 that studied the impact of SSRI exposure in pregnant mothers, and the rate of autism in children. Both of these studies suggested a null effect of SSRI exposure on autism rates in children. 
curve1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "c", measure = "ratio", steps = 10000) #> [1] 0.2194431 (ggcurve(data = curve1[[1]], type = "c", measure = "ratio", nullvalue = TRUE)) curve2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "c", measure = "ratio", steps = 10000) #> [1] 0.2435408 (ggcurve(data = curve2[[1]], type = "c", measure = "ratio", nullvalue = TRUE)) The null value is shown via the red line and it’s clear that a large mass of the function is away from it. We can also see this by plotting the likelihood functions via the curve_rev() function. lik1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "l", measure = "ratio", steps = 10000) #> [1] 0.2194431 (ggcurve(data = lik1[[1]], type = "l1", measure = "ratio", nullvalue = TRUE)) lik2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "l", measure = "ratio", steps = 10000) #> [1] 0.2435408 (ggcurve(data = lik2[[1]], type = "l1", measure = "ratio", nullvalue = TRUE)) We can also view the amount of agreement between the likelihood functions of these two studies. (plot_compare( data1 = lik1[[1]], data2 = lik2[[1]], type = "l1", measure = "ratio", nullvalue = TRUE, title = "Brown et al. 2017. J Clin Psychiatry. vs. \nBrown et al. 2017. JAMA.", subtitle = "J Clin Psychiatry: OR = 1.7, 1/6.83 LI: LL = 1.1, UL = 2.6 \nJAMA: HR = 1.61, 1/6.83 LI: LL = 0.997, UL = 2.59", xaxis = expression(Theta ~ "= Hazard Ratio / Odds Ratio") )) and the consonance functions (plot_compare( data1 = curve1[[1]], data2 = curve2[[1]], type = "c", measure = "ratio", nullvalue = TRUE, title = "Brown et al. 2017. J Clin Psychiatry. vs. \nBrown et al. 2017. JAMA.", subtitle = "J Clin Psychiatry: OR = 1.7, 1/6.83 LI: LL = 1.1, UL = 2.6 \nJAMA: HR = 1.61, 1/6.83 LI: LL = 0.997, UL = 2.59", xaxis = expression(Theta ~ "= Hazard Ratio / Odds Ratio") )) # References 1. Xie M-g, Singh K. Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review. 
International Statistical Review. 2013;81(1):3-39. doi:10.1111/insr.12000 2. Birnbaum A. A unified theory of estimation, I. The Annals of Mathematical Statistics. 1961;32(1):112-135. doi:10.1214/aoms/1177705145 3. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid Statistical Inference: Replace Confidence and Significance by Compatibility and Surprise. arXiv:1909.08579 [stat.ME]. September 2019. http://arxiv.org/abs/1909.08579. 4. Fraser DAS. P-Values: The Insight to Modern Statistical Inference. Annual Review of Statistics and Its Application. 2017;4(1):1-14. doi:10.1146/annurev-statistics-060116-054139 5. Fraser DAS. The P-value function and statistical inference. The American Statistician. 2019;73(sup1):135-147. doi:10.1080/00031305.2018.1556735 6. Poole C. Beyond the confidence interval. American Journal of Public Health. 1987;77(2):195-199. doi:10.2105/AJPH.77.2.195 7. Poole C. Confidence intervals exclude nothing. American Journal of Public Health. 1987;77(4):492-493. doi:10.2105/ajph.77.4.492 8. Schweder T, Hjort NL. Confidence and Likelihood*. Scand J Stat. 2002;29(2):309-332. doi:10.1111/1467-9469.00285 9. Schweder T, Hjort NL. Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions. Cambridge University Press; 2016. 10. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) – distribution estimator of a parameter. August 2007. http://arxiv.org/abs/0708.0976. 11. Sullivan KM, Foster DA. Use of the confidence interval function. Epidemiology. 1990;1(1):39-42. doi:10.1097/00001648-199001000-00009 12. Whitehead J. The case for frequentism in clinical trials. Statistics in Medicine. 1993;12(15-16):1405-1413. doi:10.1002/sim.4780121506 13. Rothman KJ, Greenland S, Lash TL. Precision and statistics in epidemiologic studies. In: Rothman KJ, Greenland S, Lash TL, eds. Modern Epidemiology. 3rd ed. Lippincott Williams & Wilkins; 2008:148-167. 14. Greenland S.
Valid P-values behave exactly as they should: Some misleading criticisms of P-values and their resolution with S-values. The American Statistician. 2019;73(sup1):106-114. doi:10.1080/00031305.2018.1529625 15. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948;27(3):379-423. doi:10.1002/j.1538-7305.1948.tb01338.x 16. Brown HK, Ray JG, Wilton AS, Lunsky Y, Gomes T, Vigod SN. Association between serotonergic antidepressant use during pregnancy and autism spectrum disorder in children. JAMA. 2017;317(15):1544-1552. doi:10.1001/jama.2017.3415 17. Brown HK, Hussain-Shamsy N, Lunsky Y, Dennis C-LE, Vigod SN. The association between antenatal exposure to selective serotonin reuptake inhibitors and autism: A systematic review and meta-analysis. The Journal of Clinical Psychiatry. 2017;78(1):e48-e58. doi:10.4088/JCP.15r10194
# Separable

This section contains worked examples of the type of differential equation which can be solved by direct integration.

## Separable Differential Equations

### Definition

Separable differential equations are differential equations which take one of the following forms:

• where is a continuous function of two variables.
• , where and are two continuous real functions.

### Rational Functions

A rational function is a function which can be expressed as a quotient of two polynomials.

##### Example - Simple Differential Equation

Problem: Solve:

Workings: As the equation is of first order, integrate the function twice, i.e. and

Solution:

### Trigonometric Functions

A trigonometric function is a function which can be expressed as a combination of trigonometric functions ().

##### Example - Simple Cosine

Problem:

Workings: This is the same as which we integrate in the normal way to yield

Solution:

### Physics Examples

##### Example - Potential

Problem: If a and b are the radii of concentric spherical conductors at potentials of respectively, then V is the potential at a distance r from the centre. Find the value of V if: and at r=a and at r=b

Workings: Substituting in the given values for V and r, and Thus

Solution:
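The worked equations on this page were lost when it was scraped, so as a stand-in here is a minimal sketch of what separation of variables produces, using an assumed representative equation dy/dx = x·y with y(0) = 1. Separating gives dy/y = x dx, hence ln y = x²/2 + C and y = exp(x²/2); the check below verifies this solution numerically.

```python
import math

# Assumed separable equation (the page's own equations were lost in extraction):
#   dy/dx = x*y,  y(0) = 1
# Separation of variables: dy/y = x dx  =>  ln(y) = x**2/2 + C  =>  y = exp(x**2/2).

def y(x):
    """Candidate solution obtained by separation of variables."""
    return math.exp(x ** 2 / 2)

def rhs(x):
    """Right-hand side x*y evaluated along the candidate solution."""
    return x * y(x)

# Verify dy/dx == x*y at several points via a central finite difference.
h = 1e-6
for x in [0.0, 0.5, 1.0, 1.5]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - rhs(x)) < 1e-5

print("y = exp(x^2/2) satisfies dy/dx = x*y")
```

The same substitution check applies to any separable equation: integrate each side, then differentiate the result to confirm it reproduces the original right-hand side.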
# graph.famous

##### Creating named graphs

There are some famous, named graphs, sometimes counterexamples to a conjecture or unique graphs with given features. These can be created with this function.

Keywords: graphs

##### Usage

graph.famous(name)

##### Arguments

name: Character constant giving the name of the graph. It is case insensitive.

##### Details

graph.famous knows the following graphs:

• Bull: The bull graph; 5 vertices, 5 edges, resembles the head of a bull if drawn properly.
• Chvatal: This is the smallest triangle-free graph that is both 4-chromatic and 4-regular. According to the Grünbaum conjecture there exists an m-regular, m-chromatic graph with n vertices for every m>1 and n>2. The Chvatal graph is an example for m=4 and n=12. It has 24 edges.
• Coxeter: A non-Hamiltonian cubic symmetric graph with 28 vertices and 42 edges.
• Cubical: The Platonic graph of the cube. A convex regular polyhedron with 8 vertices and 12 edges.
• Diamond: A graph with 4 vertices and 5 edges, resembles a schematic diamond if drawn properly.
• Dodecahedral, Dodecahedron: Another Platonic solid, with 20 vertices and 30 edges.
• Folkman: The semisymmetric graph with the minimum number of vertices, 20, and 40 edges. A semisymmetric graph is regular, edge transitive and not vertex transitive.
• Franklin: This is a graph whose embedding in the Klein bottle can be colored with six colors; it is a counterexample to the necessity of the Heawood conjecture on a Klein bottle. It has 12 vertices and 18 edges.
• Frucht: The Frucht graph is the smallest cubical graph whose automorphism group consists only of the identity element. It has 12 vertices and 18 edges.
• Grotzsch: The Grötzsch graph is a triangle-free graph with 11 vertices, 20 edges, and chromatic number 4.
It is named after German mathematician Herbert Grötzsch, and its existence demonstrates that the assumption of planarity is necessary in Grötzsch's theorem that every triangle-free planar graph is 3-colorable.
• Heawood: The Heawood graph is an undirected graph with 14 vertices and 21 edges. The graph is cubic, and all cycles in the graph have six or more edges. Every smaller cubic graph has shorter cycles, so this graph is the 6-cage, the smallest cubic graph of girth 6.
• Herschel: The Herschel graph is the smallest non-Hamiltonian polyhedral graph. It is the unique such graph on 11 nodes, and has 18 edges.
• House: The house graph is a 5-vertex, 6-edge graph, the schematic drawing of a house if drawn properly, basically a triangle on top of a square.
• HouseX: The same as the house graph with an X in the square. 5 vertices and 8 edges.
• Icosahedral, Icosahedron: A Platonic solid with 12 vertices and 30 edges.
• Krackhardt_Kite: A social network with 10 vertices and 18 edges. Krackhardt, D. Assessing the Political Landscape: Structure, Cognition, and Power in Organizations. Admin. Sci. Quart. 35, 342-369, 1990.
• Levi: The graph is a 4-arc transitive cubic graph; it has 30 vertices and 45 edges.
• McGee: The McGee graph is the unique 3-regular 7-cage graph; it has 24 vertices and 36 edges.
• Meredith: The Meredith graph is a quartic graph on 70 nodes and 140 edges that is a counterexample to the conjecture that every 4-regular 4-connected graph is Hamiltonian.
• Noperfectmatching: A connected graph with 16 vertices and 27 edges containing no perfect matching. A matching in a graph is a set of pairwise non-adjacent edges; that is, no two edges share a common vertex. A perfect matching is a matching which covers all vertices of the graph.
• Nonline: A graph whose connected components are the 9 graphs whose presence as a vertex-induced subgraph makes a graph a non-line graph.
It has 50 vertices and 72 edges.
• Octahedral, Octahedron: A Platonic solid with 6 vertices and 12 edges.
• Petersen: A 3-regular graph with 10 vertices and 15 edges. It is the smallest hypohamiltonian graph, i.e. it is non-Hamiltonian, but removing any single vertex from it makes it Hamiltonian.
• Robertson: The unique (4,5)-cage graph, i.e. a 4-regular graph of girth 5. It has 19 vertices and 38 edges.
• Smallestcyclicgroup: A smallest nontrivial graph whose automorphism group is cyclic. It has 9 vertices and 15 edges.
• Tetrahedral, Tetrahedron: A Platonic solid with 4 vertices and 6 edges.
• Thomassen: The smallest hypotraceable graph, on 34 vertices and 52 edges. A hypotraceable graph does not contain a Hamiltonian path, but after removing any single vertex from it the remainder always contains a Hamiltonian path. A graph containing a Hamiltonian path is called traceable.
• Tutte: Tait's Hamiltonian graph conjecture states that every 3-connected 3-regular planar graph is Hamiltonian. This graph is a counterexample. It has 46 vertices and 69 edges.
• Uniquely3colorable: Returns a 12-vertex, triangle-free graph with chromatic number 3 that is uniquely 3-colorable.
• Walther: An identity graph with 25 vertices and 31 edges. An identity graph has a single graph automorphism, the trivial one.

##### Value

A graph object.

graph can create arbitrary graphs; see also the other functions on its manual page for creating special graphs.

##### Examples

solids <- list(graph.famous("Tetrahedron"), graph.famous("Cubical"),
               graph.famous("Octahedron"), graph.famous("Dodecahedron"),
               graph.famous("Icosahedron"))

Documentation reproduced from package igraph, version 0.5.1, License: GPL (>= 2)
Coin toss

A fair coin is tossed 8 times. Find the probability that the resulting sequence of heads and tails looks the same when viewed from the beginning or from the end. If the answer is in the form \(\dfrac{a}{b}\), where \(a\) and \(b\) are coprime positive integers, find \(a+b\).
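The answer can be checked by brute force: a sequence reads the same from both ends exactly when it is a palindrome, so only the first four tosses are free (2⁴ = 16 favourable sequences out of 2⁸ = 256). A short enumeration in plain Python confirms this:

```python
from fractions import Fraction
from itertools import product

# Count length-8 H/T sequences that read the same forwards and backwards.
total = 0
palindromes = 0
for seq in product("HT", repeat=8):
    total += 1
    if seq == seq[::-1]:        # palindrome: determined by its first 4 tosses
        palindromes += 1

p = Fraction(palindromes, total)        # 16/256 reduces to 1/16
print(p, p.numerator + p.denominator)   # -> 1/16 17
```

Fraction reduces 16/256 to lowest terms automatically, so a = 1, b = 16 and a + b = 17.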
# Suppose

Suppose you know that the length of a line segment is 15, x2 = 6, y2 = 14 and x1 = -3. Find the possible values of y1. Is there more than one possible answer? Why or why not?

Correct result:

y11 = 26
y12 = 2

#### Solution:

By the distance formula, (x2 − x1)² + (y2 − y1)² = 15², so 81 + (14 − y1)² = 225, giving (14 − y1)² = 144 and y1 = 14 ± 12, i.e. y1 = 26 or y1 = 2.

Showing 1 comment:

Matematik: We draw a circle k with centre S(x2, y2) and radius r = 15, then the vertical line x = -3. The line intersects circle k twice, so there are two solutions, y11 and y12.
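The two correct results follow directly from the distance formula; a short standard-library Python check makes the "±" explicit and confirms why there are exactly two answers:

```python
import math

# Given data: endpoints (x1, y1) and (x2, y2) with a segment length of 15.
x1, x2, y2, length = -3, 6, 14, 15

# Distance formula: (x2 - x1)**2 + (y2 - y1)**2 = length**2
# => (14 - y1)**2 = 225 - 81 = 144  =>  y1 = 14 ± 12
dx2 = (x2 - x1) ** 2
root = math.isqrt(length ** 2 - dx2)
solutions = sorted({y2 - root, y2 + root})
print(solutions)   # -> [2, 26]

# Both candidates really do give a segment of the required length.
for y1 in solutions:
    assert math.hypot(x2 - x1, y2 - y1) == length
```

Geometrically, the two values are the two intersections of the vertical line x = -3 with the circle of radius 15 centred at (6, 14), which is why there is more than one answer.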
# Partial Derivatives ## Two Variables Let be a function of and ; for example The partial derivative of with respect to is the function obtained by differentiating with respect to , treating as a constant; in this case The partial derivative of with respect to is the function obtained by differentiating with respect to , treating as a constant; in this case These partial derivatives are formally defined as limits: Example 1. For we have and Example 2. For the function we have The number gives the rate of change with respect to of the function the number gives the rate of change with respect to of the function ## Geometric Interpretation Through the surface we pass a plane parallel to the -plane. The plane intersects the surface in a curve, the -section of the surface. The -section of the surface is the graph of the function Differentiating with respect to , we have and in particular The number is thus the slope of the -section of the surface at the point The other partial derivative can be given a similar interpretation. The same surface is sliced by a plane parallel to the -plane. The plane intersects the surface in a curve, -section of the surface. The -section of the surface is the graph of the function Differentiating, this time with respect to , we have and thus The number is the slope of the -section of the surface at the point ## Three Variables In the case of a function of three variables, there are three partial derivatives: the partial with respect to , the partial with respect to , and also the partial with respect to . These partials are defined as follows: Each partial can be found by differentiating with respect to the subscript variable, treating the other two variables as constants. Example 3. For the function the partial derivatives are In particular Example 4. For we have Example 5. 
For a function of the form we can write The number gives the rate of change with respect to of at ; gives the rate of change with respect to of at and gives the rate of change with respect to of at Example 6. The function has partial derivatives The numbers gives the rate of change with respect to of the function gives the rate of change with respect to of the function gives the rate of change with respect to of the function ## Other Notations There is obviously no need to restrict ourselves to the variables Where convenient we can use other letters. Example 7. The volume of the frustum of a cone is given by the function At time , Find the rate of change of the volume with respect to each of its dimensions at time if the other dimensions are held constant. Solution. The partial derivatives of are as follows: At time , the rate of change of with respect to is the rate of change of with respect to is the rate of change of with respect to is The subscript notation is not the only one used for partial differentiation. A variant of Leibniz's double-d notation is also commonly used. In this notation the partials are denoted by Thus, for we have or more simply, We can also write The "double-decker" notation is not restricted to the letters For we can write For the function we have # Limits and Continuity; Equality of Mixed Partials ## Definition of the limit of a function of Several Variables Let The function is said to have a limit at if for each there exists such that, if then In this case we write Example 1. We will show that the function does not have a limit at Along the obvious paths to , the coordinate axes, the limiting value is : along the -axis, and thus tends to ; along the -axis, and thus tends to . However, along the line the limiting value is We have shown that not all paths to yield the same limiting value. It follows that does not have a limit at As in the one-variable case, the limit (if it exists) is unique.
Moreover, if then and To say that is continuous at is to say that or, equivalently, that For two variables we can write and for three variables To say that is continuous on is to say that is continuous at all points of . # Some Examples of Continuous Functions Polynomials in several variables, for example, are everywhere continuous. In the two-variable case, that means continuity at each point of the -plane; in the three-variable case, continuity at each point of three-space. Rational functions (quotients of polynomials) are continuous everywhere except where the denominator is zero. Thus is continuous at each point of the -plane other than the origin is continuous except on the line is continuous except on the parabola is continuous at each point of three-space other than the origin is continuous except on the plane . More elaborate continuous functions can be constructed by forming composites: take, for example, The first function is continuous except along the vertical plane . The other two functions are continuous at each point of space. The continuity of such composites follows from a simple theorem that we state and prove below. In the theorem, is a function of several variables, but is a function of a single variable. # Continuity of Composite Functions Theorem. If is continuous at the point and is continuous at the number then the composition is continuous at the point . Proof. We begin with . We must show that there exists 0 such that From the continuity of at we know that there exists such that From the continuity of at , we know that there exists such that This last obviously works; namely, # Continuity in Each Variable Separately A continuous function of several variables is continuous in each of its variables separately. In the two-variable case, this means that, if then The converse is false. Example 2. We set Since we have Thus, at the point , is continuous in and continuous in . 
However, as a function of two variables, is not continuous at One way to see this is to note that we can approach as closely as we wish by points of the form with . At such points takes on the value : Hence, cannot tend to as required. # Continuity and Partial Differentiability For functions of a single variable the existence of the derivative guarantees continuity. For functions of several variables the existence of partial derivatives fails to guarantee continuity. To show this, we can use the same function Since both and are constantly zero, both partials exist (and are zero) at , and yet, the function is discontinuous at It is not hard to understand how a function can have partial derivatives and yet fail to be continuous. The existence of at depends on the behavior of only at points of the form . Similarly, the existence of at depends on the behavior of only at points of the form . On the other hand, continuity at depends on the behavior of at points of the more general form . More briefly, we can put it this way: the existence of a partial derivative depends on the behavior of the function along a line segment (two directions), whereas continuity depends on the behavior of the function in all directions. # Equality of Mixed Partials Suppose that is a function of and with first partials These are again functions of and and may themselves possess partial derivatives: These last functions are called the second-order partials. Note that there are two "mixed" partials The first of these is obtained by differentiating first with respect to and then with respect to . The second is obtained by differentiating first with respect to and then with respect to . Example 3. The function has first partials The second-order partials are Example 4. Setting we have The second-order partials are Notice that in both examples we had Since in neither case was symmetric in and , this equality of the mixed partials was not due to symmetry. Actually it was due to continuity. Theorem.
If and its partials are continuous then the mixed partials Proof. Fix Assume that Then there exists such that for From the fundamental theorem of calculus we have Similarly, Since the two iterated integrals are equal we have In the case of a function of three variables we look for three first partials and nine second partials Here again, there is equality of the mixed partials provided that and its first and second partials are continuous. Example 5. For we have
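The equality of mixed partials asserted by the theorem can be illustrated numerically. The notes' own example functions were lost in extraction, so the f below is an assumed sample; nested central differences approximate f_xy and f_yx and agree to within discretization error.

```python
# Numerical illustration of the equality of mixed partials for a smooth
# sample function f (an assumption; not one of the notes' lost examples).
def f(x, y):
    return x ** 3 * y ** 2 + x * y ** 4   # smooth, not symmetric in x and y

h = 1e-4

def f_x(x, y):    # ∂f/∂x by central difference
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def f_y(x, y):    # ∂f/∂y by central difference
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def f_xy(x, y):   # differentiate f_x with respect to y
    return (f_x(x, y + h) - f_x(x, y - h)) / (2 * h)

def f_yx(x, y):   # differentiate f_y with respect to x
    return (f_y(x + h, y) - f_y(x - h, y)) / (2 * h)

# The two mixed partials agree at every test point.
for (x, y) in [(1.0, 2.0), (-0.5, 0.3), (2.0, -1.0)]:
    assert abs(f_xy(x, y) - f_yx(x, y)) < 1e-5

# Cross-check against the exact mixed partial f_xy = 6*x**2*y + 4*y**3,
# which equals 44 at (1, 2).
assert abs(f_xy(1.0, 2.0) - 44.0) < 1e-3
```

For this f the hand computation runs f_x = 3x²y² + y⁴, then f_xy = 6x²y + 4y³, and in the other order f_y = 2x³y + 4xy³, then f_yx = 6x²y + 4y³ — the same function, as the theorem guarantees for continuous second partials.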
We investigate the slow-motion and weak-field approximation of the general ghost-free parity-violating (PV) theory of gravity in the parametrized post-Newtonian (PPN) framework, and derive the perturbative field equations, which are modified by the PV terms of this theory. The complete PPN parameters are obtained by solving the perturbative field equations. We find that all the PPN parameters are exactly the same as those in general relativity, except for an extra parameter $\kappa$, which arises from a new curl-type term in the gravitomagnetic sector of the metric in this theory. We calculate the precession effects of gyroscopes in this theory and constrain the model parameters using observations from the Gravity Probe B experiment.
Atmos. Meas. Tech., 13, 1693–1707, 2020
https://doi.org/10.5194/amt-13-1693-2020

Research article | 07 Apr 2020

# Evaluation and calibration of a low-cost particle sensor in ambient conditions using machine-learning methods

Minxing Si1,2, Ying Xiong1, Shan Du3, and Ke Du1

• 1Department of Mechanical and Manufacturing Engineering, University of Calgary, 2500 University Drive, T2N 1N4, NW, Calgary, AB, Canada
• 2Tetra Tech Canada Inc., 140 Quarry Park Blvd, T2C 3G3, Calgary, AB, Canada
• These authors contributed equally to this work.

Correspondence: Ke Du (kddu@ucalgary.ca)

Abstract

Particle sensing technology has shown great potential for monitoring particulate matter (PM) with very few temporal and spatial restrictions because of its low cost, compact size, and easy operation. However, the performance of low-cost sensors for PM monitoring in ambient conditions has not been thoroughly evaluated. Monitoring results by low-cost sensors are often questionable. In this study, a low-cost fine particle monitor (Plantower PMS 5003) was colocated with a reference instrument, the Synchronized Hybrid Ambient Real-time Particulate (SHARP) monitor, at the Calgary Varsity air monitoring station from December 2018 to April 2019. The study evaluated the performance of this low-cost PM sensor in ambient conditions and calibrated its readings using simple linear regression (SLR), multiple linear regression (MLR), and two more powerful machine-learning algorithms using random search techniques for the best model architectures. The two machine-learning algorithms are XGBoost and a feedforward neural network (NN). Field evaluation showed that the Pearson correlation (r) between the low-cost sensor and the SHARP instrument was 0.78.
The Fligner and Killeen (F–K) test indicated a statistically significant difference between the variances of the PM2.5 values by the low-cost sensor and the SHARP instrument. Large overestimations by the low-cost sensor before calibration were observed in the field and were believed to be caused by the variation of ambient relative humidity. The root mean square error (RMSE) was 9.93 when comparing the low-cost sensor with the SHARP instrument. The calibration by the feedforward NN had the smallest RMSE of 3.91 in the test dataset compared to the calibrations by SLR (4.91), MLR (4.65), and XGBoost (4.19). After calibrations, the F–K test using the test dataset showed that the variances of the PM2.5 values by the NN, XGBoost, and the reference method were not statistically significantly different. From this study, we conclude that a feedforward NN is a promising method to address the poor performance of low-cost sensors for PM2.5 monitoring. In addition, the random search method for hyperparameters was demonstrated to be an efficient approach for selecting the best model structure. 1 Introduction Particulate matter (PM), whether it is natural or anthropogenic, has pronounced effects on human health, visibility, and global climate (Charlson et al., 1992; Seinfeld and Pandis, 1998). To minimize the harmful effects of PM pollution, the Government of Canada launched the National Air Pollution Surveillance (NAPS) program in 1969 to monitor and regulate PM and other criteria air pollutants in populated regions, including ozone (O3), sulfur dioxide (SO2), carbon monoxide (CO), and nitrogen dioxide (NO2). Currently, PM monitoring is routinely carried out at 286 designated air sampling stations in 203 communities in all provinces and territories of Canada (Government of Canada, 2019). 
Many of the monitoring stations use a beta attenuation monitor (BAM), which is based on the adsorption of beta radiation, or a tapered element oscillating microbalance (TEOM) instrument, which is a mass-based technology to measure PM concentrations. An instrument that combines two or more technologies, such as the Synchronized Hybrid Ambient Real-time Particulate (SHARP) monitor, is also used in some monitoring stations. The SHARP instrument combines light scattering with beta attenuation technologies to determine PM concentrations. Although these instruments are believed to be accurate for measuring PM concentration and have been widely used by many air monitoring stations worldwide (Chow and Watson, 1998; Patashnick and Rupprecht, 1991), they have common drawbacks: they can be challenging to operate, bulky, and expensive. The instrument costs from CAD 8000 (Canadian dollars) to tens of thousands of dollars (Chong and Kumar, 2003). The SHARP instrument used in this study as a reference method costs approximately CAD 40 000 (CD Nova Instruments Ltd., 2017). Significant resources, such as specialized personnel and technicians, are also required for regular system calibration and maintenance. In addition, the sparsely spread stations may only represent PM levels in limited areas near the stations because PM concentrations vary spatially and temporally depending on local emission sources as well as meteorological conditions (Xiong et al., 2017). Such a low-resolution PM monitoring network cannot support public exposure and health effects studies that are related to PM because these studies require high-spatial- and temporal-resolution monitoring networks in the community (Snyder et al., 2013). In addition, the well-characterized scientific PM monitors are not portable due to their large size and volumetric flow rate, which means they are not practical for measuring personal PM exposure (White et al., 2012). 
As a possible solution to the above problems, a large number of low-cost PM sensors could be deployed, and a high-resolution PM monitoring network could be constructed. Low-cost PM sensors are portable and commercially available. They are cost-effective and easy to deploy, operate, and maintain, which offers significant advantages compared to conventional analytical instruments. If many low-cost sensors are deployed, PM concentrations can be monitored continuously and simultaneously at multiple locations for a reasonable cost (Holstius et al., 2014). A dense monitoring network using low-cost sensors can also assist in mapping hot spots of air pollution, creating emission inventories of air pollutants, and estimating adverse health effects due to personal exposure to PM (Kumar et al., 2015). However, low-cost sensors present challenges for broad application and installation. Most sensor systems have not been thoroughly evaluated (Williams et al., 2014), and the data generated by these sensors are of questionable quality (Wang et al., 2015). Currently, most low-cost sensors are based on laser light-scattering (LLS) technology, and the accuracy of LLS is mostly affected by particle composition, size distribution, shape, temperature, and relative humidity (Jayaratne et al., 2018; Wang et al., 2015). Several studies have evaluated LLS sensors by comparing the performance of low-cost sensors with medium- to high-cost instruments under laboratory and ambient conditions. For example, Zikova et al. (2017) used low-cost Speck monitors to measure PM2.5 concentrations in indoor and outdoor environments, and the low-cost sensors overestimated the concentration by 200 % for indoor and 500 % for outdoor compared to a reference instrument – the Grimm 1.109 dust monitor. Jayaratne et al. (2018) reported that PM10 concentrations generated by a Plantower low-cost particle sensor (PMS 1003) were 46 % greater than a TSI 8350 DustTrak DRX aerosol monitor under a foggy environment. 
Wang et al. (2015) compared PM measurements from three low-cost LLS sensors – Shinyei PPD42NS, Samyoung DSM501A, and Sharp GP2Y1010AU0F – with a SidePack (TSI Inc.) using smoke from burning incense. High linearity was found with R2 greater than 0.89, but the responses depended on particle composition, size, and humidity. The Air Quality Sensor Performance Evaluation Center (AQ-SPEC) of the South Coast Air Quality Management District (SCAQMD) also evaluated the performances of three Purple Air PA-II sensors (model: Plantower PMS 5003) by comparing their readings with two United States Environmental Protection Agency (US EPA) Federal Equivalent Method (FEM) instruments – BAM (MetOne) and Grimm dust monitors in laboratory and field environments in southern California (Papapostolou et al., 2017). Overall, the three sensors showed moderate to good accuracy compared to the reference instrument for PM2.5 for a concentration range between 0 and 250 µg m−3. Lewis et al. (2016) evaluated low-cost sensors in the field for O3, nitrogen oxide (NO), NO2, volatile organic compounds (VOCs), PM2.5, and PM10; only the O3 sensors showed good performance compared to the reference measurements. Several studies have developed calibration models using multiple techniques to improve low-cost sensor performance. For example, De Vito et al. (2008) tested feedforward neural network (NN) calibration for benzene monitoring and reported that further calibration was needed for low concentrations. Bayesian optimization was also used to search feedforward NN structures for the calibrations of CO, NO2, and NOx low-cost sensors (De Vito et al., 2009). Zheng et al. (2018) calibrated the Plantower low-cost particle sensor PMS 3003 by fitting a linear least-squares regression model. A nonlinear response was observed when ambient PM2.5 exceeded 125 µg m−3. The study concluded that a quadratic fit was more appropriate than a linear model to capture this nonlinearity. Zimmerman et al. 
(2018) explored three different calibration models, including laboratory univariate linear regression, empirical MLR, and a more modern machine-learning algorithm, random forests (RF), to improve the Real-time Affordable Multiple-Pollutant (RAMP) sensor's performance. They found that the sensors calibrated by RF models showed improved accuracy and precision over time, with average relative errors of 14 % for CO, 2 % for CO2, 29 % for NO2, and 15 % for O3. The study concluded that combining RF models with low-cost sensors is a promising approach to address the poor performance of low-cost air quality sensors. Spinelle et al. (2015) reported several calibration methods for low-cost O3 and NO2 sensors. The best calibration method for NO2 was an NN algorithm with feedforward architecture. O3 could be calibrated by simple linear regression (SLR). Spinelle et al. (2017) also evaluated and calibrated NO, CO, and CO2 sensors, and the calibrations by feedforward NN architectures showed the best results. Similarly, Cordero et al. (2018) performed a two-step calibration for an AQmesh NO2 sensor using supervised machine-learning regression algorithms, including NNs, RFs, and support vector machines (SVMs). The first step produced an explanatory variable using multivariate linear regression. In the second step, the explanatory variable was fed into machine-learning algorithms, including RF, SVM, and NN. After the calibration, the AQmesh NO2 sensor met the standards of accuracy for high concentrations of NO2 in the European Union's Directive 2008/50/EC on air quality. The results highlighted the need to develop an advanced calibration model, especially for each sensor, as the responses of individual sensors are unique. Williams et al. (2014) evaluated eight low-cost PM sensors; the study showed frequent disagreement between the low-cost PM sensors and FEMs.
In addition, the study concluded that the performances of the low-cost sensors were significantly impacted by temperature and relative humidity (RH). Recurrent NN architectures were also tested for calibrating some gas sensors (De Vito et al., 2018; Esposito et al., 2016). The results showed that the dynamic approaches performed better than traditional static calibration approaches. Calibrations of PM2.5 sensors were also reported in recent studies. Lin et al. (2018) performed two-step calibrations for PM2.5 sensors using 236 hourly data points collected on buses and road-cleaning vehicles. The first step was to construct a linear model, and the second step used RF machine learning for further calibration. The RMSE after the calibrations was 14.76 µg m−3 compared to a reference method. The reference method used in this study was a Dylos DCI1700 device, which is not a US EPA federal reference method (FRM) or FEM. Loh and Choi (2019) trained and tested the SVM, K-nearest neighbor, RF, and XGBoost machine-learning algorithms to calibrate PM2.5 sensors using 319 hourly data points. XGBoost achieved the best performance with an RMSE of 5.0 µg m−3. However, the low-cost sensors in this study were not colocated with the reference method, and the machine-learning models were not tested using unseen data (test data) for predictive power and overfitting. Although there have been studies on calibrating low-cost sensors, most of them focused on gas sensors or used short-term data to calibrate PM sensors. To the best of our knowledge, no one has reported studies on PM sensor calibration using random search techniques for the best machine-learning model configuration under ambient conditions during different seasons. In this study, a low-cost fine particle monitor (Plantower PMS 5003) was colocated with a SHARP monitor model 5030 at Calgary Varsity air monitoring station in an outdoor environment from 7 December 2018 to 26 April 2019.
The SHARP instrument is the reference method in this study and is a US EPA FEM (US EPA, 2016). The objectives of this study are (1) to evaluate the performance of the low-cost PM sensor in a range of outdoor environmental conditions by comparing its PM2.5 readings with those obtained from the SHARP instrument and (2) to assess four calibration methods: (a) an SLR, or univariate linear regression, based on the low-cost sensor values; (b) a multiple linear regression (MLR) using the PM2.5, RH, and temperature measured by the low-cost sensor as predictors; (c) a decision-tree-based ensemble algorithm called XGBoost, or Extreme Gradient Boosting; and (d) a feedforward NN architecture with a back-propagation algorithm. XGBoost and NN are the most popular algorithms used on Kaggle – a platform for data science and machine-learning competitions. In 2015, 17 winners in 29 competitions on Kaggle used XGBoost, and 11 winners used deep NN algorithms (Chen and Guestrin, 2016). This study is unique in the following ways. 1. To the best of our knowledge, this is the first comprehensive study using long-term data to calibrate low-cost particle sensors in the field. Most previous studies focused on calibrating gas sensors (Maag et al., 2018). Two studies calibrated PM sensors using machine learning, but they used short-term datasets that did not include seasonal changes in ambient conditions (Lin et al., 2018; Loh and Choi, 2019); their shortcomings were discussed above. 2. Although several studies have researched the calibration of gas sensors using NNs, this study explores multiple hyperparameters to search for the best NN architecture. Previous research configured one to three hyperparameters, compared to six in this study (De Vito et al., 2008, 2009, 2018; Esposito et al., 2016; Spinelle et al., 2015, 2017). In addition, this study tested the rectified linear unit (ReLU) as the activation function in the feedforward NN. 
Compared to the sigmoid and tanh activation functions used in previous studies for NN calibration models, the ReLU function can accelerate the convergence of stochastic gradient descent by a factor of 6 (Krizhevsky et al., 2017). 3. Previous NN and tree-based calibration models used a manual search or grid search for hyperparameter tuning. This study introduced a random search method to find the best calibration models. A random search is more efficient than traditional manual and grid searches (Bergstra and Bengio, 2012) and evaluates more of the search space, especially when the search space has more than three dimensions (Timbers, 2017). Zheng (2015) explained that a random search with 60 samples will find a close-to-optimal combination with 95 % probability.
2 Method
## 2.1 Data preparation
One low-cost sensor unit was provided by the Calgary-based company SensorUp and deployed at the Varsity station in the Calgary Region Airshed Zone (CRAZ) in Calgary, Alberta, Canada. The unit contains one sensor, one electrical board, and one housing as a shelter. The sensor in the unit is the Plantower PMS 5003; it measured outdoor fine particle (PM2.5) concentrations (µg m−3), air temperature (°C), and RH (%) every 6 s. The minimum particle diameter detectable by the sensor is 0.3 µm. The instrument costs approximately CAD 20 and is referred to as the low-cost sensor in this paper. The low-cost sensor is based on laser light scattering (LLS) technology; the PM2.5 mass concentration is estimated from the detected amount of scattered light. The LLS sensor is installed on the electrical board and then placed in the shelter for outdoor monitoring. The unit has a wireless link to a router in the Varsity station. A picture of the low-cost sensor unit and the SHARP instrument colocated on the roof of the Varsity station is provided in Fig. 1. The location of the Varsity station is shown in Fig. 2. 
The router uses cellular service to transfer the data from the low-cost sensor to SensorUp's cloud data storage system. The measured outdoor PM2.5, temperature, and RH data at a 6 s interval from 00:00 on 7 December 2018 to 23:00 on 26 April 2019 were downloaded from the cloud data storage system for evaluation and calibration.
Figure 1. The low-cost sensor used in the study and the ambient inlet of the reference method, the SHARP model 5030.
Figure 2. Location of the Varsity air monitoring station. The map was created using ArcGIS®. The administrative boundaries in Canada and imagery data were provided by Natural Resources Canada (2020) and DigitalGlobe (2019).
Figure 3. Example of a neural network structure.
The reference instrument used to evaluate the low-cost sensor is a Thermo Fisher Scientific SHARP model 5030, installed at the Calgary Varsity station by CRAZ. The SHARP instrument continuously uses two compatible technologies, light scattering and beta attenuation, to measure PM2.5 every 6 min with an accuracy of ±5 %. The SHARP instrument is operated and maintained by CRAZ in accordance with the provincial government's guidelines outlined in Alberta's air monitoring directive, and the instrument was calibrated monthly. Hourly PM2.5 data are published on the Alberta Air Data Warehouse website (http://www.airdata.alberta.ca/, last access: 3 June 2019). The Calgary Varsity station also continuously monitors CO, methane, oxides of nitrogen, non-methane hydrocarbons, outdoor air temperature, O3, RH, total hydrocarbon, wind direction, and wind speed. Detailed information on the analytical systems at the CRAZ Varsity station can be found on their website (https://craz.ca/monitoring/info-calgary-nw/, last access: 3 June 2019).
Figure 4. Comparison of the hourly PM2.5 values between the low-cost PM sensor and SHARP. Based on 3050 hourly paired data points. The low-cost sensor has 250 hourly data points greater than 30 µg m−3. 
SHARP has 174 hourly data points greater than 20 µg m−3. Bars indicate the 25th and 75th percentile values, whiskers extend to values within 1.5 times the interquartile range (IQR), and dots represent values outside the IQR. The box plot explanation on the right is adapted from DeCicco (2016). The meteorological parameters in this study measured by the SHARP instrument are presented in Table 1.
Table 1. Ambient conditions measured by SHARP.
Figure 5. PM2.5, relative humidity, and temperature data on the basis of a 24 h rolling average.
Figure 6. SHARP versus low-cost sensor PM2.5 concentration (µg m−3). The yellow dashed line is a 1:1 line. The solid blue line is a regression line. Panel (a) is in full scale, and panel (b) is a zoom-in plot of panel (a). The green circle represents data density.
The following steps were taken to process the raw data from 00:00 on 7 December 2018 to 23:00 on 26 April 2019.
1. The 6 s interval data recorded by the low-cost sensor, including PM2.5, temperature, and RH, were averaged into hourly data to pair with the SHARP data because only hourly SHARP data are publicly available.
2. The hourly sensor data and hourly SHARP data were combined into one structured data table; the PM2.5, temperature, and RH columns measured by the low-cost sensor and the PM2.5 column measured by SHARP were selected. The data table then contains 3384 rows and four columns, where each row represents one hourly data point.
3. Rows in the data table with missing values were removed: 299 missing values for PM2.5 from the low-cost sensor and 36 missing values for PM2.5 from the SHARP instrument. The SHARP data are missing because of instrument calibration, whereas the reason for the missing low-cost sensor data is unknown.
4. The data used for the NN were transformed by z standardization to a mean of zero and a standard deviation of 1. 
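The preprocessing steps above can be sketched in pandas as follows; this is a minimal sketch assuming a DataFrame `sensor` of 6 s readings and a DataFrame `sharp` of hourly readings, both indexed by timestamp, and the column names are illustrative rather than taken from the study's data files:

```python
import pandas as pd

def preprocess(sensor: pd.DataFrame, sharp: pd.DataFrame) -> pd.DataFrame:
    """Pair 6 s sensor readings with hourly SHARP data, drop gaps, standardize."""
    # Step 1: average the 6 s sensor readings (PM2.5, temperature, RH) into hourly data.
    hourly = sensor[["pm25", "temp", "rh"]].resample("1h").mean()
    # Step 2: combine the hourly sensor data and the hourly SHARP data into one table.
    table = hourly.join(sharp[["pm25_sharp"]], how="outer")
    # Step 3: remove rows with missing values.
    table = table.dropna()
    # Step 4: z-standardize the sensor columns (used for the NN) to mean 0, SD 1.
    cols = ["pm25", "temp", "rh"]
    table[cols] = (table[cols] - table[cols].mean()) / table[cols].std(ddof=0)
    return table
```

The join-then-drop order matters: averaging before joining ensures the 6 s sensor stream and the hourly SHARP stream share one time base before missing rows are discarded.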
After the above steps, the processed data table with 3050 rows and four columns was used for evaluation and calibration. The data file is provided in the Supplement to this paper. Each row represents one example (sample) for training or testing by the calibration methods.
## 2.2 Low-cost sensor evaluation
The Pearson correlation coefficient was used to assess the correlation between the PM2.5 values of the low-cost sensor and those of SHARP, the reference method. The PM2.5 data from the low-cost sensor and SHARP were also compared using the root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE). The Fligner–Killeen test (F–K test) was used to evaluate the equality (homogeneity) of variances for the PM2.5 values between the low-cost sensor and the SHARP instrument (Fligner and Killeen, 1976). The F–K test is a superior option in terms of robustness and power when data are non-normally distributed, the population means are unknown, or outliers cannot be removed (Conover et al., 1981; de Smith, 2018). The null hypothesis of the F–K test is that all populations' variances are equal; the alternative hypothesis is that the variances are statistically significantly different.
Figure 7. PM2.5 versus relative humidity.
Figure 8. Data density comparison in the test dataset. Based on 610 test examples. NN: neural network, MLR: multiple linear regression, SLR: simple linear regression. PM2.5 data greater than 30 µg m−3 are not shown in the figure. See the box plot explanation in Fig. 4.
Figure 9. Data distribution comparison. Based on 610 test examples. NN: neural network, MLR: multiple linear regression, SLR: simple linear regression.
Figure 10. Performances of different calibration methods. Based on 610 test examples. NN: neural network, MLR: multiple linear regression, SLR: simple linear regression.
## 2.3 Calibration
Four calibration methods were evaluated: SLR, MLR, XGBoost, and NN. 
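As a minimal sketch of the simplest of these methods, an SLR calibration can be fit on the training split and applied to new readings; the array names are illustrative, and negative predictions are replaced with the raw sensor value, as described below for the regression-based methods:

```python
import numpy as np

def calibrate_slr(pm25_train, sharp_train, pm25_new):
    """Fit Eq. (1), y_hat = b0 + b1 * PM2.5, and predict for new sensor readings."""
    b1, b0 = np.polyfit(pm25_train, sharp_train, deg=1)  # slope, intercept
    y_hat = b0 + b1 * pm25_new
    # An unbounded regression can extrapolate below zero; fall back to the
    # raw sensor reading wherever the prediction is negative.
    return np.where(y_hat < 0, pm25_new, y_hat)
```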
Some predictions from the SLR, MLR, and XGBoost have negative values because these regressions are unbounded and can extrapolate beyond the observed values. When the predicted PM2.5 values generated by these calibration methods were negative, the negative values were replaced with the sensor data. MLR, XGBoost, and the feedforward NN use the PM2.5, temperature, and RH data measured by the low-cost sensor as inputs; the PM2.5 measured by the SHARP instrument is used as the target to supervise the machine-learning process. The processed dataset, with 3050 rows and four columns, was randomly shuffled and then divided into a training set, composed of the data used to build the models and minimize the loss function, and a test set, composed of data that the models had never seen before testing (Si et al., 2019). The test dataset was used only once and gave an unbiased evaluation of the final model's performance. The evaluation tested the ability of the machine-learning model to provide sensible predictions for new inputs (LeCun et al., 2015). The training dataset had 2440 examples (samples), and the test dataset had 610 examples (samples).
### 2.3.1 Simple linear regression and multiple linear regression
The calibration by an SLR used Eq. (1):
$$\hat{y} = \beta_0 + \beta_1 \times \mathrm{PM}_{2.5}. \quad (1)$$
Here β0 and β1 are the model coefficients and were calculated using the training dataset; $\hat{y}$ is a model-predicted (calibrated) value, and PM2.5 is the value measured by the low-cost sensor. The MLR used the PM2.5, RH, and temperature measured by the low-cost sensor as predictors because the low-cost sensor only measured these parameters. The model is expressed as Eq. (2): 
$$\hat{y} = \beta_0 + \beta_1 \times \mathrm{PM}_{2.5} + \beta_2 \times T + \beta_3 \times \mathrm{RH}. \quad (2)$$
The model coefficients, β0 to β3, were calculated using the training dataset with the SHARP-provided readings as $\hat{y}$. The outputs of the models generated by the SLR and MLR were evaluated by comparison to the SHARP readings in the test dataset.
Table 2. Calibration results by SLR and MLR using the test dataset. Note: the test dataset contains 610 examples.
### 2.3.2 XGBoost
XGBoost is a scalable decision-tree-based ensemble algorithm that uses a gradient boosting framework (Chen and Guestrin, 2016). XGBoost was implemented using the XGBoost (version 0.90) and scikit-learn (version 0.21.2) packages in Python (version 3.7.3). A random search method (Bergstra and Bengio, 2012) was used to tune the hyperparameters in the XGBoost algorithm; the tuned hyperparameters include
• the number of trees to fit (n_estimators);
• the maximum depth of a tree (max_depth);
• the step size shrinkage used in an update (learning_rate);
• the subsample ratio of columns when constructing each tree (colsample_bytree);
• the minimum loss reduction required to make a further partition on a leaf node of the tree (gamma);
• the L2 regularization on weights (reg_lambda); and
• the minimum sum of instance weight needed in a child (min_child_weight).
A detailed explanation of each hyperparameter is provided in the XGBoost documentation (XGBoost developers, 2019). A 10-fold cross-validation was used to select the model with minimum MSE from the random search. The best model was then evaluated against the SHARP PM2.5 data using the test dataset.
Figure 11. Comparison between the NN predictions and SHARP. Based on 610 test examples. Panel (a) is in full scale. Panel (b) is a zoom-in plot of panel (a). 
The solid blue line is a regression line. The yellow dashed line is a 1:1 line. The green circle represents data density. The grey area along the regression line represents 1 standard deviation.
Figure 12. Comparison between the XGBoost predictions and SHARP. Based on 610 test examples. NN: neural network. Panel (a) is in full scale. Panel (b) is a zoom-in plot of panel (a). The solid blue line is a regression line. The yellow dashed line is a 1:1 line. The green circle represents data density. The grey area along the regression line represents 1 standard deviation.
### 2.3.3 Neural network
A fully connected feedforward NN architecture was used in the study. In a fully connected NN, each unit (node) in a layer is connected to every unit in the following layer. Data from the input layer are passed through the network until the unit(s) in the output layer is (are) reached. An example of a fully connected feedforward NN is presented in Fig. 3. A back-propagation algorithm is used to minimize the difference between the SHARP-measured values and the predicted values (Rumelhart et al., 1986). The NN was implemented using the Keras (version 2.2.4) and TensorFlow (version 1.14.0) libraries in Python (version 3.7.3). Keras and TensorFlow were the most referenced deep-learning frameworks in scientific research in 2017 (RStudio, 2018); Keras is the front end of TensorFlow. The learning rate, L2 regularization rate, number of hidden layers, number of units in the hidden layers, and optimization methods were tuned using the random search method provided in the scikit-learn machine-learning library. A 10-fold cross-validation was used to evaluate the models, and the model with the minimum MSE was considered the best-fit model and then used for model testing.
3 Results and discussion
## 3.1 Sensor evaluation
### 3.1.1 Hourly data
The RMSE, MSE, and MAE between the low-cost sensor and SHARP for the hourly PM2.5 data were 10.58, 111.83, and 5.74, respectively. 
The Pearson correlation coefficient r value was 0.78. The PM2.5 concentrations measured by the sensor ranged from 0 to 178 µg m−3 with a standard deviation of 14.90 µg m−3 and a mean of 9.855 µg m−3. The PM2.5 concentrations measured by SHARP ranged from 0 to 80 µg m−3 with a standard deviation of 7.80 µg m−3 and a mean of 6.55 µg m−3. Both the SHARP and the low-cost sensor datasets had a median of 4.00 µg m−3 based on hourly data (Fig. 4). The violin plot in Fig. 4 describes the distribution of the PM2.5 values measured by the low-cost sensor and SHARP using a density curve; the width of each curve represents the frequency of PM2.5 values at each concentration level. The p value from the F–K test was less than $2.2 \times 10^{-16}$, indicating that the variance of the PM2.5 values measured by the low-cost sensor was statistically significantly different from the variance of the PM2.5 values measured by the SHARP instrument.
### 3.1.2 24 h rolling average data
Over 24 h, the median value for SHARP was 5.38 µg m−3, and for the low-cost sensor it was 5.01 µg m−3. Over 5 months (December 2018 to April 2019), the low-cost sensor tended to generate higher PM2.5 values than the SHARP monitoring data (Fig. 5). When PM2.5 concentrations were greater than 10 µg m−3, the low-cost sensor consistently produced values that were higher than the reference method (Fig. 6). When the concentrations were less than 10 µg m−3, the performance of the low-cost sensor was close to the reference method, producing slightly smaller values (Fig. 6).
## 3.2 Calibration by simple linear regression and multiple linear regression
The RMSE was 4.91 after calibration by SLR and 4.65 by MLR (Table 2). The r value was 0.74 by SLR and 0.77 by MLR. The p values from the F–K tests for the SLR and MLR were less than 0.05, which suggested that the variances of the PM2.5 values were still statistically significantly different. 
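The random-search tuning of Sect. 2.3.2 and 2.3.3 can be sketched with scikit-learn's RandomizedSearchCV. In this sketch, scikit-learn's GradientBoostingRegressor stands in for XGBoost, the synthetic data and hyperparameter ranges are illustrative only, and 10-fold cross-validation with MSE scoring selects the best model, as in the study:

```python
import numpy as np
from scipy.stats import fligner
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

rng = np.random.default_rng(0)
# Illustrative stand-in data: columns play the role of sensor PM2.5, temperature, RH.
X = rng.uniform(0.0, 50.0, size=(400, 3))
y = 0.6 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0.0, 1.0, 400)  # "reference" target

# Shuffled 80/20 train/test split, mirroring the 2440/610 split in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Randomly sample hyperparameter combinations; keep the model with minimum MSE.
search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [2, 3, 4],
        "learning_rate": [0.05, 0.1, 0.2],
        "subsample": [0.7, 1.0],
    },
    n_iter=10,
    cv=10,                             # 10-fold cross-validation
    scoring="neg_mean_squared_error",  # minimum MSE = maximum negative MSE
    random_state=0,
)
search.fit(X_train, y_train)

# Evaluate once on the held-out test set, including the F-K variance test.
pred = search.predict(X_test)
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
fk_stat, fk_p = fligner(pred, y_test)
```

With a well-calibrated model, the F–K p value should exceed 0.05, i.e. no evidence that the variance of the predictions differs from that of the reference values.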
## 3.3 Calibration by XGBoost
The hyperparameters selected by the random search for the best model using XGBoost are presented in Table 3. In the training dataset, the RMSE was 3.03 and the MAE was 1.93 for the best XGBoost model. In the test dataset, the RMSE was reduced by 57.8 % using XGBoost, from 9.93 for the sensor to 4.19 (Table 4). The p value from the F–K test using the test dataset was 0.7256, which showed no evidence that the variances of the PM2.5 values differed with statistical significance between the XGBoost-predicted values and the SHARP-measured values.
## 3.4 Calibration by neural network
The hyperparameters for the best NN model are presented in Table 5.
Table 3. Hyperparameters for the best XGBoost model.
Table 4. Calibration results by XGBoost using the test dataset. Note: the test dataset contains 610 examples.
Table 5. Hyperparameters for the best neural network model.
In the training dataset, the RMSE was 3.22 and the MAE was 2.17 for the best NN-based model. In the test dataset, the RMSE was reduced by 60 % using the NN, from 9.93 to 3.91 (Table 6). The p value from the F–K test was 0.43, which suggested that the variances of the PM2.5 values were not statistically significantly different between the NN-predicted values and the SHARP-measured values.
Table 6. Calibration results by the neural network using the test dataset. Note: the test dataset includes 610 examples.
Table 7. Descriptive statistics by season. Note: the mean is calculated as $\left( \sum_{i=1}^{n} \left| \mathrm{sensor}_{\mathrm{daily}} - \mathrm{SHARP}_{\mathrm{daily}} \right| \right) / n$.
## 3.5 Discussion
### 3.5.1 Relative humidity impact
RH has significant effects on the low-cost sensor's responses, and the RH trend closely matched the low-cost sensor's PM2.5 trend. 
The spikes in the low-cost sensor's PM2.5 trend corresponded with increases in RH values, and the low-cost sensor tended to produce inaccurately high PM2.5 values when RH suddenly increased (Fig. 5). However, the relationship between PM2.5 and RH was not linear (Fig. 7).
Table 8. Descriptive statistics of PM2.5 concentrations using the test dataset.
### 3.5.2 Seasonal impact
We assessed the seasonal impact on the low-cost sensor by comparing the means of the absolute differences between the daily average sensor values and the daily average SHARP values in winter (December 2018 to February 2019) and spring (March 2019 to April 2019). Descriptive statistics are presented in Table 7. We used a two-sample t test to assess whether the means of the absolute differences for winter and spring were equal. The p value of the t test was 0.754. Because $P = 0.754 > \alpha = 0.05$, we retained the null hypothesis: there was not sufficient evidence at the α=0.05 level to conclude that the means of the absolute differences between the low-cost sensor and SHARP values were significantly different for winter and spring.
### 3.5.3 Calibration assessment
Descriptive statistics of the PM2.5 concentrations in the test dataset for SHARP, the low-cost sensor, XGBoost, NN, SLR, and MLR are presented in Table 8. The arithmetic mean of the PM2.5 concentrations measured by the low-cost sensor was 9.44 µg m−3. In contrast, the means of the PM2.5 concentrations were 6.44 µg m−3 by SHARP, 6.40 µg m−3 by XGBoost, and 6.09 µg m−3 by NN. NN and XGBoost produced data distributions that were similar to SHARP (Fig. 8). SLR had the worst performance; Fig. 9 shows that SLR could not predict low concentrations. The predictions made by NN and XGBoost ranged from 0.19 to 47.19 µg m−3 and from 0.00 to 39.94 µg m−3, respectively. In the test dataset, the NN produced the lowest MAE of 2.38 (Fig. 10). 
The MAEs were 2.63 by XGBoost, 3.09 by MLR, and 3.21 by SLR when compared with the PM2.5 data measured by the SHARP instrument. The NN also had the lowest RMSE score in the test dataset: the RMSEs were 3.91 for the NN, 4.19 for XGBoost, and 9.93 for the low-cost sensor (Fig. 10). The Pearson r value by the NN was 0.85, compared to 0.74 by the low-cost sensor. The XGBoost and NN machine-learning algorithms had better performance than the traditional SLR and MLR calibration methods; the NN calibration reduced the RMSE by 60 %. Both the NN and XGBoost demonstrated the ability to correct the bias at high concentrations produced by the low-cost sensor (Figs. 11 and 12). Most of the values greater than 10 µg m−3 in the NN model fell closer to the yellow 1:1 line (Fig. 11). The NN had slightly better performance at low concentrations than XGBoost.
4 Conclusions
In this study, we evaluated one low-cost sensor against a reference instrument, SHARP, using 3050 hourly data points from 00:00 on 7 December 2018 to 23:00 on 26 April 2019. The p value from the F–K test suggested that the variances of the PM2.5 values were statistically significantly different between the low-cost sensor and the SHARP instrument. Based on the 24 h rolling average, the low-cost sensor in this study tended to report higher PM2.5 values than the SHARP instrument, and it had a strong bias when PM2.5 concentrations were greater than 10 µg m−3. The study also showed that the sensor's biased responses are likely caused by sudden changes in RH. Four calibration methods were tested and compared: SLR, MLR, NN, and XGBoost. The p values from the F–K tests for the XGBoost and NN were greater than 0.05, which indicated that, after calibration by XGBoost and the NN, the variances of the PM2.5 values were not statistically significantly different from the variance of the PM2.5 values measured by the SHARP instrument. 
In contrast, the p values from the F–K tests for the SLR and MLR were still less than 0.05. The NN generated the lowest RMSE score in the test dataset of 610 samples: the RMSE by NN was 3.91, the lowest of the four methods, while the RMSEs were 4.91 by SLR, 4.65 by MLR, and 4.19 by XGBoost. However, wide deployment of low-cost sensors may still face challenges, including the following.
• Durability of the low-cost sensor. The low-cost sensor used in the study was deployed in the ambient environment. We installed four sensors between 7 December 2018 and 20 June 2019. Only one sensor lasted approximately 5 months; the data from this sensor were used in this study. The other three sensors lasted only 2 weeks to 1 month and collected limited data; they did not collect enough data for machine learning and were therefore not used in this study.
• Missing data. In this study, the low-cost sensor dataset has 299 missing values for PM2.5 concentrations. The reason for the missing data is unknown.
• Transferability of machine-learning models. The models developed by the two more powerful machine-learning algorithms used to calibrate the low-cost sensor data tend to be sensor-specific because of the nature of machine learning. Further research is needed to test the transferability of the models for broader use.
Data availability. The hourly sensor data and hourly SHARP data are provided online at https://doi.org/10.5281/zenodo.3473833 (Si, 2019).
Author contributions. MS conducted the evaluation and calibrations. YX installed the sensor and monitored and collected the sensor data. MS and YX wrote the paper together and made an equal contribution. SD edited the machine-learning methods. KD secured the funding and supervised the project. All authors discussed the results and commented on the paper.
Competing interests. The authors declare that they have no conflict of interest. 
Disclaimer. Reference to any companies or specific commercial products does not constitute endorsement or recommendation by the authors.
Acknowledgements. The authors wish to thank SensorUp for providing the low-cost sensors and the Calgary Region Airshed Zone air quality program manager Mandeep Dhaliwal for helping with the installation of the PM sensors and a 4G LTE router, as well as the collection of the SHARP data. The authors would also like to thank Jessica Coles for editing an earlier version of this paper.
Financial support. This research has been supported by the Natural Sciences and Engineering Research Council of Canada (grant nos. EGP 521823-17 and CRDPJ 535813-18).
Review statement. This paper was edited by Keding Lu and reviewed by four anonymous referees.
References
Bergstra, J. and Bengio, Y.: Random Search for Hyper-Parameter Optimization, J. Mach. Learn. Res., 13, 281–305, 2012. CDNova Instrument Ltd.: SHARP Cost Estimate, Calgary, Canada, 2017. Charlson, R. J., Schwartz, S. E., Hales, J. M., Cess, R. D., Coakley, J. A., Hansen, J. E., and Hofmann, D. J.: Climate Forcing by Anthropogenic Aerosols, Science, 255, 423–430, https://doi.org/10.1126/science.255.5043.423, 1992. Chen, T. and Guestrin, C.: XGBoost: A Scalable Tree Boosting System, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining – KDD '16, 785–794, ACM Press, San Francisco, California, USA, 2016. Chong, C.-Y. and Kumar, S. P.: Sensor networks: Evolution, opportunities, and challenges, Proc. IEEE, 91, 1247–1256, https://doi.org/10.1109/JPROC.2003.814918, 2003. Chow, J. C. and Watson, J. G.: Guideline on Speciated Particulate Monitoring, available at: https://www3.epa.gov/ttn/amtic/files/ambient/pm25/spec/drispec.pdf (last access: 17 September 2019), 1998. Conover, W. J., Johnson, M. E., and Johnson, M. 
M.: A Comparative Study of Tests for Homogeneity of Variances, with Applications to the Outer Continental Shelf Bidding Data, Technometrics, 23, 351–361, https://doi.org/10.1080/00401706.1981.10487680, 1981. Cordero, J. M., Borge, R., and Narros, A.: Using statistical methods to carry out in field calibrations of low cost air quality sensors, Sensor Actuat. B-Chem., 267, 245–254, https://doi.org/10.1016/j.snb.2018.04.021, 2018. DeCicco, L.: Exploring ggplot2 boxplots – Defining limits and adjusting style, available at: https://owi.usgs.gov/blog/boxplots/ (last access: 18 September 2019), 2016. de Smith, M.: Statistical Analysis Handbook, 2018 Edition, The Winchelsea Press, Drumlin Security Ltd, Edinburgh, available at: http://www.statsref.com/HTML/index.html?fligner-killeen_test.html (last access: 7 September 2019), 2018. De Vito, S., Massera, E., Piga, M., Martinotto, L., and Di Francia, G.: On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario, Sensor Actuat. B-Chem., 129, 750–757, https://doi.org/10.1016/j.snb.2007.09.060, 2008. De Vito, S., Piga, M., Martinotto, L., and Di Francia, G.: CO, NO2 and NOx urban pollution monitoring with on-field calibrated electronic nose by automatic bayesian regularization, Sensor Actuat. B-Chem., 143, 182–191, https://doi.org/10.1016/j.snb.2009.08.041, 2009. De Vito, S., Esposito, E., Salvato, M., Popoola, O., Formisano, F., Jones, R., and Di Francia, G.: Calibrating chemical multisensory devices for real world applications: An in-depth comparison of quantitative machine learning approaches, Sensor Actuat. B-Chem., 255, 1191–1210, https://doi.org/10.1016/j.snb.2017.07.155, 2018. DigitalGlobe: ESRI World Imagery Basemap Service, Environmental Systems Research Institute (ESRI), Redlands, California USA, 2019. Esposito, E., De Vito, S., Salvato, M., Bright, V., Jones, R. 
### Calculate the average atomic weight when given isotopic weights and abundances

Fifteen Examples

To do these problems you need some information. To wit:

(a) the exact atomic weight for each naturally occurring stable isotope
(b) the percent abundance for each isotope

These values can be looked up in a standard reference book such as the "Handbook of Chemistry and Physics." The values can also be looked up via many online sources. The ChemTeam prefers to use Wikipedia to look up values.

The unit associated with the answers to the problems below can be either amu or g/mol, depending on the context of the question. If it is not clear from the context that g/mol is the desired answer, go with amu (which means atomic mass unit). By the way, the most correct symbol for the atomic mass unit is u. The older symbol (which the ChemTeam grew up with) is amu (sometimes seen as a.m.u.). The unit amu is still in use, but you will see u used more often.

This problem can also be reversed, as in having to calculate the isotopic abundances when given the atomic weight and isotopic weights. Study the tutorial below and then look at the tutorial linked just above.

Example #1: Calculate the average atomic weight for carbon.

| mass number | isotopic weight | percent abundance |
|---|---|---|
| 12 | 12.000000 | 98.93 |
| 13 | 13.003355 | 1.07 |

Solution:

To calculate the average atomic weight, each isotopic atomic weight is multiplied by its percent abundance (expressed as a decimal). Then, add the results together and round off to an appropriate number of significant figures.

(12.000000) (0.9893) + (13.003355) (0.0107) = 12.0107 amu

This is commonly rounded to 12.011 or sometimes 12.01. The answers to problems like this tend to not follow strict significant figure rules. Consult a periodic table to see what manner of answers are considered acceptable.
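The rule in Example #1 (multiply each isotopic weight by its decimal abundance, then add) is easy to script. Here is a minimal sketch; the function name `average_atomic_weight` is mine, not from the tutorial:

```python
def average_atomic_weight(isotopes):
    """Weighted average of isotopic weights.

    `isotopes` is a list of (isotopic weight, percent abundance) pairs;
    each percent abundance is converted to a decimal before multiplying.
    """
    return sum(weight * (percent / 100.0) for weight, percent in isotopes)

# Example #1: carbon
carbon = [(12.000000, 98.93), (13.003355, 1.07)]
print(round(average_atomic_weight(carbon), 4))  # 12.0107
```

Dividing by 100 inside the function also guards against the factor-of-100 mistake demonstrated in Example #3 below.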
Example #2: Nitrogen

| mass number | isotopic weight | percent abundance |
|---|---|---|
| 14 | 14.003074 | 99.636 |
| 15 | 15.000108 | 0.364 |

Solution:

(14.003074) (0.9963) + (15.000108) (0.0037) = 14.007 amu (or 14.007 u)

(isotopic weight) (abundance) + (isotopic weight) (abundance) = average atomic weight

A point about the term 'atomic weight:' when discussing the atomic weight of an element, the value is an average. When discussing the atomic weight of an isotope, the value is one that has been measured experimentally, not an average.

Example #3: Silicon

| mass number | isotopic weight | percent abundance |
|---|---|---|
| 28 | 27.976927 | 92.23 |
| 29 | 28.976495 | 4.67 |
| 30 | 29.973770 | 3.10 |

Solution:

(27.976927) (92.23) + (28.976495) (4.67) + (29.973770) (3.10) = 2808.55 u

There is a problem with the answer!! The true value is 28.086 u. Our answer is too large by a factor of 100. This is because I used percentages (92.23, 4.67, 3.10) and not the decimal equivalents (0.9223, 0.0467, 0.0310). To obtain the correct answer, we must divide by 100.

Example #4: (the problem statement and worked arithmetic for this example appeared in the original as an embedded item titled "How to Calculate an Average Atomic Weight"; from the remarks below, it used isotopic weights 184.953 and 186.956 and obtained an average of 186.207)

Solution:

Two points: (1) notice I wrote the same number of decimal places in the answer as were in the isotopic weights (the 184.953 and the 186.956). This is common. (2) I forgot to put a unit on the answer, so 186.207 u would be the most correct answer.

Example #5: In a sample of 400 lithium atoms, it is found that 30 atoms are lithium-6 (6.015 g/mol) and 370 atoms are lithium-7 (7.016 g/mol). Calculate the average atomic mass of lithium.

Solution:

1) Calculate the percent abundance for each isotope:

Li-6: 30/400 = 0.075
Li-7: 370/400 = 0.925

2) Calculate the average atomic weight:

x = (6.015) (0.075) + (7.016) (0.925)
x = 6.94 g/mol

I put g/mol for the unit because that's what was used in the problem statement.

Example #6: A sample of element X contains 100 atoms with a mass of 12.00 and 10 atoms with a mass of 14.00. Calculate the average atomic mass (in amu) of element X.
Solution:

1) Calculate the percent abundance for each isotope:

X-12: 100/110 = 0.909
X-14: 10/110 = 0.091

2) Calculate the average atomic weight:

x = (12.00) (0.909) + (14.00) (0.091)
x = 12.18 amu (to four sig figs)

3) Here's another way:

100 atoms with mass 12 = total atom mass of 1200
10 atoms with mass 14 = total atom mass of 140
1200 + 140 = 1340 (total mass of all atoms)
Total number of atoms = 100 + 10 = 110
1340/110 = 12.18 amu

4) The first way is the standard technique for solving this type of problem. That's because we do not generally know the specific number of atoms in a given sample. More commonly, we know the percent abundances, which is different from the specific number of atoms in a sample.

Example #7: Boron has an atomic mass of 10.81 u according to the periodic table. However, no single atom of boron has a mass of 10.81 u. How can you explain this difference?

Solution:

10.81 amu is an average, specifically a weighted average. It turns out that there are two stable isotopes of boron: boron-10 and boron-11. Neither isotope weighs 10.81 u, but you can arrive at 10.81 u like this:

x = (10.013) (0.199) + (11.009) (0.801)
x = 1.99 + 8.82 = 10.81 u

It's like the old joke: consider a centipede and a snake. What's the average number of legs? Answer: 50. Of course, neither one has 50.

Example #8: Copper occurs naturally as Cu-63 and Cu-65. Which isotope is more abundant?

Solution:

Look up the atomic weight of copper: 63.546 amu. Since our average value is closer to 63 than to 65, we conclude that Cu-63 is the more abundant isotope.

Example #9: Copper has two naturally occurring isotopes. Cu-63 has an atomic mass of 62.9296 amu and an abundance of 69.15%. What is the atomic mass of the second isotope? What is its nuclear symbol?
Solution:

1) Look up the atomic weight of copper: 63.546 amu

2) Set up the following and solve:

(62.9296) (0.6915) + (x) (0.3085) = 63.546
43.5158 + 0.3085x = 63.546
0.3085x = 20.0302
x = 64.9277 amu

3) The nuclear symbol is ${}_{29}^{65}\text{Cu}$

4) You might also see this: 29-Cu-65. This is used in situations, such as the Internet, where the subscript/superscript notation cannot be reproduced. You might also see this: 65/29Cu

Example #10: Naturally occurring iodine has an atomic mass of 126.9045. A 12.3849 g sample of iodine is accidentally contaminated with 1.0007 g of I-129, a synthetic radioisotope of iodine used in the treatment of certain diseases of the thyroid gland. The mass of I-129 is 128.9050 amu. Find the apparent "atomic mass" of the contaminated iodine.

Solution:

1) Calculate the mass of the contaminated sample:

12.3849 g + 1.0007 g = 13.3856 g

2) Calculate the percent abundances of (a) natural iodine and (b) I-129 in the contaminated sample:

(a) 12.3849 g / 13.3856 g = 0.92524
(b) 1.0007 g / 13.3856 g = 0.07476

3) Calculate the "atomic mass" of the contaminated sample:

(126.9045) (0.92524) + (128.9050) (0.07476) = x
x = 127.0540 amu

Example #11: Neon has two major isotopes, Neon-20 and Neon-22. Out of every 250 neon atoms, 225 will be Neon-20 (19.992 g/mol), and 25 will be Neon-22 (21.991 g/mol). What is the average atomic mass of neon?

Solution:

1) Determine the percent abundances (but leave as a decimal):

Ne-20 ---> 225 / 250 = 0.90
Ne-22 ---> 25 / 250 = 0.10

The last value can also be done by subtraction, in this case 1 − 0.9 = 0.1

2) Calculate the average atomic weight:

(19.992) (0.90) + (21.991) (0.10) = 20.19

Example #12: Calculate the average atomic weight for magnesium:

| mass number | exact weight | percent abundance |
|---|---|---|
| 24 | 23.985042 | 78.99 |
| 25 | 24.985837 | 10.00 |
| 26 | 25.982593 | 11.01 |

The answer? Find magnesium on the periodic table. Remember that the above is the method by which the average atomic weight for the element is computed.
No one single atom of the element has the given atomic weight because the atomic weight of the element is an average, specifically called a "weighted" average. See Example #7 and the example just below to see how this "no individual atom has the average weight" can be exploited.

Example #13: Silver has an atomic mass of 107.868 amu. Does any atom of any isotope of silver have a mass of 107.868 amu? Explain why or why not.

Solution:

The specific question is about silver, but it could be any element. The answer, of course, is no. The atomic weight of silver is a weighted average. Silver is not composed of atoms each of which weighs 107.868.

Example #14: Given that the average atomic mass of hydrogen in nature is 1.0079, what does that tell you about the percent composition of H-1 and H-2 in nature?

Solution:

It tells you that the proportion of H-1 is much, much greater than the proportion of H-2 in nature.

Example #15: The relative atomic mass of neon is 20.18. It consists of three isotopes with the masses of 20, 21, and 22. It consists of 90.5% Ne-20. Determine the percent abundances of the other two isotopes.

Solution:

1) Let y% be the relative abundance of Ne-21.

2) Then, the relative abundance of Ne-22 is: (100 − 90.5 − y)% = (9.5 − y)%

3) Relative atomic mass of Ne (note the use of decimal abundances here, not percent abundances):

(20) (0.905) + (21) (y) + (22) (0.095 − y) = 20.18
18.10 + 21y + 2.09 − 22y = 20.18
y = 0.010

Relative abundances (converted back to percents):

Ne-21 = 1.0%
Ne-22 = (9.5 − 1)% = 8.5%

Bonus Example #1: There are only two naturally occurring isotopes of bromine in equal abundance. An atom of one isotope has 44 neutrons. How many neutrons are in an atom of the other isotope?

(a) 44     (b) 9     (c) 46     (d) 36     (e) 35

Solution:

Choice (a): Isotopes have the same number of protons in each atom, but a different number of neutrons, so the other isotope cannot also have 44 neutrons.
Choice (b): The number of neutrons in the various stable isotopes of a given element are almost always within a few neutrons of each other. Tin's ten stable isotopes span 12 neutrons; this is the largest span of stable isotopes the ChemTeam can think of without looking things up. Bismuth isotopes span 36 neutrons, but none of them are naturally-occurring (i.e., stable). Also, the number of neutrons in a given atom is always fairly close to how many protons there are. There are no cases of the number of neutrons being 26 less (or more) than the atomic number.

Choice (c): This is the correct answer. It is different from 44 and it's only 2 away from 44.

Choices (d) and (e): The span in numbers of neutrons of naturally-occurring isotopes is not 8 or 9. It is much less, usually a span of one, two, or three neutrons. While it is true that tin isotopes span 12 neutrons, there are 8 isotopes in between the lightest and the heaviest isotopes. The span between adjacent isotopes in tin is one or two neutrons.

Bonus Example #2: Bromine has only two naturally occurring isotopes of equal abundance. An atom of one isotope has 35 protons. How many protons are in an atom of the other isotope?

(a) 44     (b) 36     (c) 46     (d) 37     (e) 35

Solution:

This is a trick question. Both isotopes are atoms of the element bromine. All atoms of bromine, regardless of how many neutrons are present, contain the same number of protons, so the answer is (e), 35.
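The reverse calculation in Example #15 can also be checked numerically. A sketch under the same assumptions as the example (Ne-20 abundance known, Ne-21 and Ne-22 splitting the remainder; the variable names are mine):

```python
m20, m21, m22 = 20.0, 21.0, 22.0   # integer isotopic masses, as used in Example #15
avg = 20.18                        # relative atomic mass of neon
a20 = 0.905                        # known decimal abundance of Ne-20

rest = 1.0 - a20                   # combined abundance of Ne-21 and Ne-22 (0.095)
# Solve (m20)(a20) + (m21)(y) + (m22)(rest - y) = avg for y:
y = (m20 * a20 + m22 * rest - avg) / (m22 - m21)

print(round(y, 3))         # 0.01  -> Ne-21 is 1.0%
print(round(rest - y, 3))  # 0.085 -> Ne-22 is 8.5%
```

The same rearrangement works for any pair of unknown abundances whose isotopic masses differ.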
## sarah_98 2 years ago

ryan puts $500 into a bank account. the bank pays 5% compound interest per year. how much is the interest after one year?

1. ParthKohli: Compound interest. $$\Rightarrow 500(1 + 0.05)^n$$
2. ParthKohli: $n$ is the number of years.
3. sarah_98: ok
4. ParthKohli: $$\Rightarrow 500(1.05)^1 \Rightarrow 500 \times 1.05$$
5. sarah_98: $525?
6. sarah_98: is it?
7. ParthKohli: This is compounded interest.
8. sarah_98: im confused :s
9. ParthKohli: And yes it is $525 :)
10. sarah_98: oh ok there's part 2
11. ZhangYan: lol sorry parth read mistake ;)
12. ParthKohli: Interest = Amount − Principal = 525 − 500 = 25
13. sarah_98: work out the total amount he has in his bank account after 2 yrs
14. ParthKohli: @ZhangYan if it's one year then both simple and compound may apply :) so your answer is correct but not the method :S
15. sarah_98: work out the total amount he has in his bank account after 2 yrs
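The whole thread boils down to the compound-interest formula A = P(1 + r)^n. A quick sketch (the function name is mine):

```python
def compound(principal, rate, years):
    """Total amount after `years` of annual compounding: P * (1 + r)**n."""
    return principal * (1 + rate) ** years

print(round(compound(500, 0.05, 1), 2))        # 525.0  (total after one year)
print(round(compound(500, 0.05, 1) - 500, 2))  # 25.0   (the interest itself)
print(round(compound(500, 0.05, 2), 2))        # 551.25 (part 2: total after two years)
```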
threeparttable notes with 1.5 spacing. Want single spacing

My main text uses one-and-a-half line spacing, and now the notes below the table are also one-and-a-half spaced. I'd prefer them to have single spacing (all of my other tables don't share this problem). If I were to simply use the tabular environment within a threeparttable, I'd get the desired look for my footnotes, but I can't seem to reproduce the single line spacing for this table. How do I fix the code for this table so that I can stretch the table across the width of the text on the page, but have my footnotes appear below the table with single spacing? Any help would be greatly appreciated.

\documentclass{report}

% packages
\usepackage{amsmath} % Extra math definitions
\usepackage{graphics} % PostScript figures
\usepackage{setspace} % 1.5 spacing
\usepackage{longtable,threeparttablex} % Tables spanning pages
\usepackage{color}
\usepackage{graphicx}
\usepackage{rotating}
\usepackage{array}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage[table]{xcolor}
\usepackage{subcaption}
\usepackage{pdfpages}
\usepackage[notes,backend=bibtex]{biblatex-chicago}
\usepackage[utf8]{inputenc}
\usepackage[full]{textcomp}
\usepackage[T1]{fontenc}
\usepackage{afterpage}
\usepackage{float}
\usepackage{fp}
\usepackage{pdflscape} % To rotate pages with sideways tables or large figures.
\usepackage{xparse}
\usepackage{siunitx} % Lets tables align columns by decimal point.
\usepackage{lmodern}
\usepackage[french,english]{babel}
\usepackage{caption}
\usepackage{pifont}
\usepackage{microtype}
\usepackage{amssymb}
\usepackage{arydshln}
\usepackage{cleveref}
\usepackage[bottom]{footmisc} % This places footnotes at the bottom so figures will appear above footnotes.

\newcommand{\tabitem}{~~\llap{\textbullet}~~}
\DisableLigatures[f]{encoding=T1}
\crefformat{footnote}{#2\footnotemark[#1]#3}

% Let tables fit to width of page.
\newcommand\totextwidth[1]{%
\sbox{\mytabularbox}{#1}%
\figwidthc=\wd\mytabularbox%
\textwidthc=\textwidth%
\FPdiv\scaleratio{\the\textwidthc}{\the\figwidthc}%
\FPmin\scaleratio{\scaleratio}{1}%
\scalebox{\scaleratio}{\usebox{\mytabularbox}}%
}

\begin{document}
\onehalfspacing

\afterpage{
\begin{ThreePartTable}
\setlength{\LTleft}{0pt}
\setlength{\LTright}{0pt}
\renewcommand\TPTminimum{\textwidth}
\renewcommand{\arraystretch}{0.8}
\begin{TableNotes}[flushleft]
\small
\item \textsuperscript{a} \textit{Event Density} (1.5) refers to the number of notes identified in a 1.5 s window starting from the onset of cadential arrival.
\item \textsuperscript{b} \textit{Caesura} refers to the presence of a rest across all four instrumental parts.
\item \textsuperscript{c} \textit{Elision} refers to the superposition of a new intrathematic phrase at the moment of cadential arrival, an accompanimental overlap in the bass, or a melodic lead-in.
\item \textsuperscript{d} \textit{Interthematic Function} refers to one of the following temporal functions to characterize the passage at the theme level following cadential arrival: Before-the-Beginning, Beginning, Middle, End, After-the-End. \textit{Intrathematic Function} refers to either the Beginning, Middle, or End functions that characterize the passage at the phrase level following cadential arrival.
\end{TableNotes}
\begin{longtable}{@{\hskip\tabcolsep\extracolsep\fill}lccc}
\caption{Descriptive statistics for the 11 retrospective features.} \\
\toprule
\textit{Retrospective Features} & \multicolumn{1}{c}{\textit{M} (\textit{SD})} & \multicolumn{1}{c}{\textit{Range}} & \multicolumn{1}{c}{\textit{Mode} (\textit{Frequency})} \\
\midrule
\textbf{Segmentational Grouping} & & & \textbf{} \\
\multicolumn{1}{l}{\quad (1) \textit{Next Note Onset} (s)} & \multicolumn{1}{c}{.57 (.44)} & \multicolumn{1}{c}{.1-1.8} & \multicolumn{1}{c}{} \\
\multicolumn{1}{l}{\quad (2) \textit{Next Bass Note Onset} (s)} & \multicolumn{1}{c}{.97 (.77)} & \multicolumn{1}{c}{.1-2.8} & \multicolumn{1}{c}{} \\
\multicolumn{1}{l}{\quad (3) \textit{Next Soprano Note Onset} (s)} & \multicolumn{1}{c}{.99 (.66)} & \multicolumn{1}{c}{.1-2.6} & \multicolumn{1}{c}{} \\
\multicolumn{1}{l}{\quad (4) \textit{Event Density}\textsuperscript{a}} & \multicolumn{1}{c}{9.98 (5.25)} & \multicolumn{1}{c}{3-19} & \multicolumn{1}{c}{} \\
\multicolumn{1}{l}{\quad (5) \textit{Caesura}\textsuperscript{b}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Absent (28)} \\
\multicolumn{1}{l}{\quad (6) \textit{Elision}\textsuperscript{c}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{None (26)} \\
\textbf{Temporal Function} & \multicolumn{1}{c}{\textbf{}} & \multicolumn{1}{c}{\textbf{}} & \multicolumn{1}{c}{\textbf{}} \\
\multicolumn{1}{l}{\quad (7) \textit{Interthematic Function}\textsuperscript{d}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{End (20)} \\
\multicolumn{1}{l}{\quad (8) \textit{Intrathematic Function}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Beginning (19)} \\
\multicolumn{1}{l}{\quad (9) \textit{Repetition}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Present (25)} \\
\multicolumn{1}{l}{\quad (10) \textit{Stimulus Length} (s)} & \multicolumn{1}{c}{15.68 (4.14)} & \multicolumn{1}{c}{9.2-27.6} & \multicolumn{1}{c}{} \\
\multicolumn{1}{l}{\quad (11) \textit{Stimulus Length from CA} (s)} & \multicolumn{1}{c}{7.20 (2.15)} & \multicolumn{1}{c}{3.6-14.1} & \multicolumn{1}{c}{} \\
\bottomrule
\insertTableNotes
\end{longtable}
\label{tab:ex2rhetorical}%
\end{ThreePartTable}
}
\end{document}

• I have no idea what the result of that looks like (and I wrote half the code it's using). Please always make examples a complete document loading all needed packages so that people can run them locally. – David Carlisle Feb 2 '15 at 14:59
• what do you mean by "because I'm using longtable, which allows me to stretch the table to textwidth."? longtable has no features for controlling horizontal widths other than those inherited from the underlying tabular; longtable just allows page breaks. – David Carlisle Feb 2 '15 at 15:01
• Okay, I've included all of the preamble material so you should be able to reproduce the table as I currently see it. – David S Feb 2 '15 at 15:36

(Aside: I updated/simplified significantly after the OP indicated the method he/she uses to change the default line spacing of the document as well as the packages needed to make the code compilable.)

I have several suggestions:

• Since you're using the setspace package and the instruction \onehalfspacing to change the default line spacing for the document as a whole, you could insert the instruction \singlespacing after the start of the ThreePartTable environment to switch locally to single-spacing.
• The contents of the longtable environment are made needlessly complicated because of all those \multicolumn{1}{l}{...} and \multicolumn{1}{c}{...} wrappers. As far as I can tell, these wrappers are not needed. (All they do is create code clutter.)
• Since you're using the threeparttablex package, you could write \tnote{a} and \item[a] instead of the more-laborious \textsuperscript{a} and \item \textsuperscript{a}, etc.

Here's a simplified version of the example code you gave.
I've commented out the instructions that load packages that aren't actually used in the MWE. By the way, there's no point in loading graphics and graphicx individually, since the rotating package loads graphicx. Similarly, don't load color if you're also going to load xcolor.

\documentclass{report}

% packages
%\usepackage{amsmath} % Extra math definitions
%\usepackage{graphics} % PostScript figures
\usepackage{setspace} % 1.5 spacing
\usepackage{longtable,threeparttablex} % Tables spanning pages
%\usepackage{color}
%\usepackage{graphicx}
%\usepackage{rotating}
%\usepackage{array}
%\usepackage{multirow}
\usepackage{booktabs}
%\usepackage[table]{xcolor}
%\usepackage{subcaption}
%\usepackage{pdfpages}
%\usepackage[notes,backend=bibtex]{biblatex-chicago}
\usepackage[utf8]{inputenc}
%\usepackage[full]{textcomp}
\usepackage[T1]{fontenc}
\usepackage{afterpage}
%\usepackage{float}
%\usepackage{fp}
%\usepackage{pdflscape} % To rotate pages with sideways tables or large figures.
%\usepackage{xparse}
%\usepackage{siunitx} % Lets tables align columns by decimal point.
%\usepackage{lmodern}
%\usepackage[french,english]{babel}
%\usepackage{caption}
%\usepackage{pifont}
%\usepackage{microtype}
%\usepackage{amssymb}
%\usepackage{arydshln}
%\usepackage[bottom]{footmisc} % This places footnotes at the bottom so figures will appear above footnotes.
%\usepackage{cleveref}

%\newcommand{\tabitem}{~~\llap{\textbullet}~~}
%\DisableLigatures[f]{encoding=T1}
%\crefformat{footnote}{#2\footnotemark[#1]#3}
%
%% Let tables fit to width of page.
%\newcommand\totextwidth[1]{%
%\sbox{\mytabularbox}{#1}%
%\figwidthc=\wd\mytabularbox%
%\textwidthc=\textwidth%
%\FPdiv\scaleratio{\the\textwidthc}{\the\figwidthc}%
%\FPmin\scaleratio{\scaleratio}{1}%
%\scalebox{\scaleratio}{\usebox{\mytabularbox}}%
%}

\begin{document}
\onehalfspacing

\afterpage{%
\begin{ThreePartTable}
\singlespacing % switch locally to single-spacing
\setlength{\LTleft}{0pt}
\setlength{\LTright}{0pt}
\renewcommand\TPTminimum{\textwidth}
%\renewcommand{\arraystretch}{0.8} % not necessary, is it?
\begin{TableNotes}[flushleft]
\small
\item[a] \textit{Event Density} (1.5) refers to the number of notes identified in a 1.5~s window starting from the onset of cadential arrival.
\item[b] \textit{Caesura} refers to the presence of a rest across all four instrumental parts.
\item[c] \textit{Elision} refers to the superposition of a new intrathematic phrase at the moment of cadential arrival, an accompanimental overlap in the bass, or a melodic lead-in.
\item[d] \textit{Interthematic Function} refers to one of the following temporal functions to characterize the passage at the theme level following cadential arrival: Before-the-Beginning, Beginning, Middle, End, After-the-End. \textit{Intrathematic Function} refers to either the Beginning, Middle, or End functions that characterize the passage at the phrase level following cadential arrival.
\end{TableNotes}
\setlength\tabcolsep{0.1pt} % default value: 6pt
\begin{longtable}{@{\extracolsep\fill}lccc} % make longtable span width of text block
\caption{Descriptive statistics for the 11 retrospective features.}
\label{tab:ex2rhetorical}\\
\toprule
\textit{Retrospective Features} & \textit{M} (\textit{SD}) & \textit{Range} & \textit{Mode} (\textit{Frequency}) \\
\midrule
\textbf{Segmentational Grouping} & & & \\
\quad (1) \textit{Next Note Onset} (s) & .57 (.44) & .1--1.8 & \\
\quad (2) \textit{Next Bass Note Onset (s)} & .97 (.77) & .1--2.8 & \\
\quad (3) \textit{Next Soprano Note Onset} (s) & .99 (.66) & .1--2.6 & \\
\quad (4) \textit{Event Density}\tnote{a} & 9.98 (5.25) & 3--19 & \\
\quad (5) \textit{Caesura}\tnote{b} & & & Absent (28) \\
\quad (6) \textit{Elision}\tnote{c} & & & None (26) \\
\textbf{Temporal Function} & & & \\
\quad (7) \textit{Interthematic Function}\tnote{d} & & & End (20) \\
\quad (8) \textit{Intrathematic Function} & & & Beginning (19) \\
\quad (9) \textit{Repetition} & & & Present (25) \\
\quad (10) \textit{Stimulus Length} (s) & 15.68 (4.14) & 9.2--27.6 & \\
\quad (11) \textit{Stimulus Length from CA} (s) & 7.20 (2.15) & 3.6--14.1 & \\
\bottomrule
\insertTableNotes
\end{longtable}
\end{ThreePartTable}
}
\end{document}

• Perhaps you should update your remarks to reflect the use of threeparttablex (don't know if you added that to the OP's example) – daleif Feb 2 '15 at 16:01
• @daleif - Thanks. I had revised my answer after the OP indicated that he/she was using the setspace package, but I didn't notice right away that he/she also posted some information about the other packages that are being loaded. I've now deleted the second bullet point from the answer. Incidentally, I seem to have no problems combining longtable and threeparttable (and hence don't find it necessary to load the threeparttablex package). Is that because of the recent updates to these packages?
– Mico Feb 2 '15 at 16:15
• You cannot use longtable and threeparttable directly. The threeparttable environment is a block that cannot be broken. That is why threeparttablex provides extra environments with different names. Actually, the ThreePartTable environment does not do much more than making sure a certain macro exists. – daleif Feb 2 '15 at 16:41
• @daleif - Many thanks for this explanation. I realize now that this issue doesn't show up in the OP's code because the threeparttable in question doesn't span more than one page. – Mico Feb 2 '15 at 16:46
• Exactly the point – daleif Feb 2 '15 at 17:46
# What is the distribution of the 'achieved' $R^2$?

I am interested in the distribution of/performing inference on the 'achieved' $R^2$ coefficient in multiple linear regression. Suppose that $y = x\beta + \epsilon$ with $\epsilon \sim \mathcal{N}\left(0,\sigma^2\right)$, where $x$ is a $p$-vector. You observe $x_1,x_2,\ldots,x_n$ and corresponding $y_1,y_2,\ldots,y_n$, with independent errors. You perform linear regression to get $\hat{\beta}$, and then compute the sample $R^2$ in the usual way: $R^2 = 1 - \frac{SS_{\mathrm{err}}}{SS_{\mathrm{tot}}}$. Nothing new here.

There are known methods for hypothesis testing the population analogue of $R^2$, as well as for computing confidence intervals on it. (I am thinking of Lee's paper, and follow-ups by Algina, O'Brien, inter alios.) The population analogue is defined in terms of the (unknown) true regression coefficients $\beta$ and the true variance $\sigma^2$.

I am interested in the achieved $R^2$, which is the amount of variance explained by $\hat{\beta}$. Conditional on $x$ and on $\hat{\beta}$, I would define it as
$$R^2_{\mathrm{ach}} = 1 - \frac{\mathrm{E}\left[\left(y - \hat{\beta}x\right)^2\right]}{\mathrm{E}\left[\left(y-\bar{y}\right)^2\right]}.$$
Clearly this is less than the population $R^2$, because the true $\beta$ minimizes $\mathrm{E}\left[\left(y - \beta x\right)^2\right]$. I would guess it is less than the sample $R^2$, because that (usually) overestimates the population $R^2$.

I am having problems thinking about this quantity because it is both unobserved and random ($\hat{\beta}$ is a random variable). I am guessing that, up to some transforms, I can represent this as a non-central chi-square, abusing $\hat{\beta}\sim\mathcal{N}\left(\beta,\sigma^2\left(X^{\top}X\right)^{-1}\right)$, and noting that $\mathrm{Var}\left[y - \hat{\beta}x\right] = \sigma^2 + \mathrm{Var}\left[(\hat{\beta} - \beta)x\right]$, but I'm being dense and lazy. (Also, treating $x$ as fixed will later be generalized to $x$ being random.)
Is this a well-known problem? For predicting future performance of a linear model, it would seem to be a more important quantity than the population $R^2$, for example.

• Re "clearly this is less than": this statement appears to be equivalent to asserting that the OLS estimate of $\sigma^2$ is biased! Am I missing something here? – whuber Jun 12 '12 at 19:04
• @whuber, hmm. The achieved $R^2$ is computed in terms of the population parameters, so I'm not sure how the OLS estimate of $\sigma^2$ enters into it. – shabbychef Jun 12 '12 at 19:19
• Ahh, I see the confusion; the $\mathrm{Var}$ is conditional on $x$ and $\hat{\beta}$. That is, I have run the regression, computed $\hat{\beta}$, and then will use the regression model 'out of sample'. I can see how this is confusing. Edit coming. – shabbychef Jun 12 '12 at 19:22
• The problem is that 'conditional on $x$' forces $y-\hat{\beta}x$ to be just a shift of $y$, and so $R^2_{\mathrm{ach}}$ is just identically zero as defined. As noted, I will later have to consider $x$ as being normally distributed, in which case, I believe, the achieved $R^2$ as defined is no longer forced to be zero. Bleah. – shabbychef Jun 12 '12 at 19:29
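A cheap way to build intuition here is simulation. The sketch below is my own, using a single regressor through the origin with random $x \sim \mathcal{N}(0,1)$, a simplification of the question's setup. In that case $\mathrm{Var}[y] = \beta^2 + \sigma^2$ and $\mathrm{E}[(y-\hat\beta x)^2] = (\hat\beta-\beta)^2 + \sigma^2$, which give the closed forms $R^2_{\mathrm{pop}} = \beta^2/(\beta^2+\sigma^2)$ and $R^2_{\mathrm{ach}} = (\beta^2-(\hat\beta-\beta)^2)/(\beta^2+\sigma^2)$, so the achieved $R^2$ can never exceed the population $R^2$:

```python
import random

random.seed(0)
beta, sigma, n = 1.0, 1.0, 50

# one simulated sample; least-squares fit with no intercept
x = [random.gauss(0, 1) for _ in range(n)]
y = [beta * xi + random.gauss(0, sigma) for xi in x]
beta_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

r2_pop = beta ** 2 / (beta ** 2 + sigma ** 2)
r2_ach = (beta ** 2 - (beta_hat - beta) ** 2) / (beta ** 2 + sigma ** 2)

print(r2_pop)            # 0.5
print(r2_ach <= r2_pop)  # True: (beta_hat - beta)**2 >= 0 forces this
```

Repeating over many seeds shows the achieved value clustering just below the population value, with the gap shrinking as $n$ grows.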
# Distance geometry

Distance geometry is the characterization and study of sets of points based only on given values of the distances between member pairs.[1][2][3] More abstractly, it is the study of semimetric spaces and the isometric transformations between them. In this view, it can be considered as a subject within general topology.[4]

Historically, the first result in distance geometry is Heron's formula in the 1st century AD. The modern theory began in the 19th century with work by Arthur Cayley, followed by more extensive developments in the 20th century by Karl Menger and others.

Distance geometry problems arise whenever one needs to infer the shape of a configuration of points (relative positions) from the distances between them, such as in biology,[4] sensor networks,[5] surveying, navigation, cartography, and physics.

## Introduction and definitions

The concepts of distance geometry will first be explained by describing two particular problems.

### First problem: hyperbolic navigation

Consider three ground radio stations A, B, C, whose locations are known. A radio receiver is at an unknown location. The times it takes for a radio signal to travel from the stations to the receiver, $t_A, t_B, t_C$, are unknown, but the time differences, $t_A - t_B$ and $t_A - t_C$, are known. From them, one knows the distance differences $c(t_A - t_B)$ and $c(t_A - t_C)$, from which the position of the receiver can be found.

### Second problem: dimension reduction

In data analysis, one is often given a list of data represented as vectors $\mathbf{v} = (x_1, \cdots, x_n) \in \mathbb{R}^n$, and one needs to find out whether they lie within a low-dimensional affine subspace. A low-dimensional representation of data has many advantages, such as saving storage space, computation time, and giving better insight into data.
### Definitions

Now we formalize some definitions that naturally arise from considering our problems.

#### Semimetric space

Given a list of points $R = \{P_0, \cdots, P_n\}$, $n \geq 0$, we can arbitrarily specify the distances between pairs of points by a list of $d_{ij} > 0$, $0 \leq i < j \leq n$. This defines a semimetric space: a metric space without the triangle inequality.

Explicitly, we define a semimetric space as a nonempty set $R$ equipped with a semimetric $d: R \times R \to [0, \infty)$ such that, for all $x, y \in R$,

1. Positivity: $d(x, y) = 0$ if and only if $x = y$.
2. Symmetry: $d(x, y) = d(y, x)$.

Any metric space is a fortiori a semimetric space. In particular, $\mathbb{R}^k$, the $k$-dimensional Euclidean space, is the canonical metric space in distance geometry.

The triangle inequality is omitted in the definition because we do not want to enforce more constraints on the distances $d_{ij}$ than the mere requirement that they be positive.

In practice, semimetric spaces naturally arise from inaccurate measurements. For example, given three points $A, B, C$ on a line, with $d_{AB} = 1, d_{BC} = 1, d_{AC} = 2$, an inaccurate measurement could give $d_{AB} = 0.99, d_{BC} = 0.98, d_{AC} = 2.00$, violating the triangle inequality.

#### Isometric embedding

Given two semimetric spaces, $(R, d), (R', d')$, an isometric embedding from $R$ to $R'$ is a map $f: R \to R'$ that preserves the semimetric, that is, for all $x, y \in R$, $d(x, y) = d'(f(x), f(y))$.
For example, given the finite semimetric space $(R, d)$ defined above, an isometric embedding into $\mathbb{R}^k$ is defined by points $A_0, A_1, \ldots, A_n \in \mathbb{R}^k$, such that $d(A_i, A_j) = d_{ij}$ for all $0 \leq i < j \leq n$.

#### Affine independence

Given the points $A_0, A_1, \ldots, A_n \in \mathbb{R}^k$, they are defined to be affinely independent iff they cannot fit inside a single $l$-dimensional affine subspace of $\mathbb{R}^k$ for any $l < n$, iff the $n$-simplex they span, $v_n$, has positive $n$-volume, that is, $\mathrm{Vol}_n(v_n) > 0$.

In general, when $k \geq n$, they are affinely independent, since a generic $n$-simplex is nondegenerate. For example, 3 points in the plane are, in general, not collinear, because the triangle they span does not degenerate into a line segment. Similarly, 4 points in space are, in general, not coplanar, because the tetrahedron they span does not degenerate into a flat triangle.

When $n > k$, they must be affinely dependent. This can be seen by noting that any $n$-simplex that can fit inside $\mathbb{R}^k$ must be "flat".

## Cayley–Menger determinants

Cayley–Menger determinants, named after Arthur Cayley and Karl Menger, are determinants of matrices of distances between sets of points.
Let $A_0, A_1, \ldots, A_n$ be $n + 1$ points in a semimetric space. Their Cayley–Menger determinant is defined by

$CM(A_{0},\cdots ,A_{n})={\begin{vmatrix}0&d_{01}^{2}&d_{02}^{2}&\cdots &d_{0n}^{2}&1\\d_{01}^{2}&0&d_{12}^{2}&\cdots &d_{1n}^{2}&1\\d_{02}^{2}&d_{12}^{2}&0&\cdots &d_{2n}^{2}&1\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\d_{0n}^{2}&d_{1n}^{2}&d_{2n}^{2}&\cdots &0&1\\1&1&1&\cdots &1&0\end{vmatrix}}$

If $A_0, A_1, \ldots, A_n \in \mathbb{R}^k$, then they make up the vertices of a (possibly degenerate) $n$-simplex $v_n$ in $\mathbb{R}^k$. It can be shown that[6] the $n$-dimensional volume of the simplex $v_n$ satisfies

$\mathrm{Vol}_n(v_n)^2 = \frac{(-1)^{n+1}}{(n!)^2 2^n} CM(A_0, \cdots, A_n)$.

Note that, for the case of $n = 0$, we have $\mathrm{Vol}_0(v_0) = 1$, meaning the "0-dimensional volume" of a 0-simplex is 1, that is, there is 1 point in a 0-simplex.

$A_0, A_1, \ldots, A_n$ are affinely independent iff $\mathrm{Vol}_n(v_n) > 0$, that is, $(-1)^{n+1} CM(A_0, \cdots, A_n) > 0$. Thus Cayley–Menger determinants give a computational way to prove affine independence.

If $k < n$, then the points must be affinely dependent, thus $CM(A_0, \cdots, A_n) = 0$. Cayley's 1841 paper studied the special case of $k = 3, n = 4$, that is, any 5 points $A_0, \cdots, A_4$ in 3-dimensional space must have $CM(A_0, \cdots, A_4) = 0$.

## History

The first result in distance geometry is Heron's formula, from the 1st century AD, which gives the area of a triangle from the distances between its 3 vertices. Brahmagupta's formula, from the 7th century AD, generalizes it to cyclic quadrilaterals.
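The volume formula specializes, for $n = 2$, to Heron's formula; a short numerical sketch (assuming NumPy; the function names are ours):

```python
import math
import numpy as np

def cayley_menger(dist):
    """Cayley-Menger determinant from an (n+1) x (n+1) matrix of
    pairwise distances (zero diagonal)."""
    m = len(dist)
    M = np.ones((m + 1, m + 1))
    M[:m, :m] = np.asarray(dist, dtype=float) ** 2  # squared distances
    M[-1, -1] = 0.0                                 # bordered corner
    return float(np.linalg.det(M))

def simplex_volume(dist):
    """n-volume of the simplex spanned by the points, via
    Vol^2 = (-1)^(n+1) / ((n!)^2 2^n) * CM."""
    n = len(dist) - 1
    v2 = (-1) ** (n + 1) / (math.factorial(n) ** 2 * 2 ** n) * cayley_menger(dist)
    return math.sqrt(max(v2, 0.0))
```

For the 3-4-5 right triangle the determinant comes out as -576 and the recovered area is 6, matching Heron's formula.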
Tartaglia, from the 16th century AD, generalized it to give the volume of a tetrahedron from the distances between its 4 vertices.

The modern theory of distance geometry began with Arthur Cayley and Karl Menger.[7] Cayley published the Cayley determinant in 1841,[8] which is a special case of the general Cayley–Menger determinant. Menger proved in 1928 a characterization theorem of all semimetric spaces that are isometrically embeddable in the $n$-dimensional Euclidean space $\mathbb{R}^n$.[9][10] In 1931, Menger used distance relations to give an axiomatic treatment of Euclidean geometry.[11]

Leonard Blumenthal's book[12] gives a general overview of distance geometry at the graduate level; much of its material was treated in English for the first time when it was published.

## Menger characterization theorem

Menger proved the following characterization theorem of semimetric spaces:[2]

A semimetric space $(R, d)$ is isometrically embeddable in the $n$-dimensional Euclidean space $\mathbb{R}^n$, but not in $\mathbb{R}^m$ for any $0 \leq m < n$, if and only if:

1. $R$ contains an $(n+1)$-point subset $S$ that is isometric with an affinely independent $(n+1)$-point subset of $\mathbb{R}^n$;
2. any $(n+3)$-point subset $S'$, obtained by adding any two additional points of $R$ to $S$, is congruent to an $(n+3)$-point subset of $\mathbb{R}^n$.

A proof of this theorem in a slightly weakened form (for metric spaces instead of semimetric spaces) is given in [13].

## Characterization via Cayley–Menger determinants

The following results are proved in Blumenthal's book.[12]
### Embedding $n+1$ points in $\mathbb{R}^n$

Given a semimetric space $(S, d)$, with $S = \{P_0, \cdots, P_n\}$, and $d(P_i, P_j) = d_{ij} \geq 0$, $0 \leq i < j \leq n$, an isometric embedding of $(S, d)$ into $\mathbb{R}^n$ is defined by $A_0, A_1, \ldots, A_n \in \mathbb{R}^n$, such that $d(A_i, A_j) = d_{ij}$ for all $0 \leq i < j \leq n$.

Again, one asks whether such an isometric embedding exists for $(S, d)$.

A necessary condition is easy to see: for all $k = 1, \cdots, n$, let $v_k$ be the $k$-simplex formed by $A_0, A_1, \ldots, A_k$; then

$(-1)^{k+1} CM(P_0, \cdots, P_k) = (-1)^{k+1} CM(A_0, \cdots, A_k) = 2^k (k!)^2 \mathrm{Vol}_k(v_k)^2 \geq 0$

The converse also holds. That is, if for all $k = 1, \cdots, n$, $(-1)^{k+1} CM(P_0, \cdots, P_k) \geq 0$, then such an embedding exists.

Further, such an embedding is unique up to isometry in $\mathbb{R}^n$. That is, given any two isometric embeddings defined by $A_0, A_1, \ldots, A_n$ and $A'_0, A'_1, \ldots, A'_n$, there exists a (not necessarily unique) isometry $T: \mathbb{R}^n \to \mathbb{R}^n$, such that $T(A_k) = A'_k$ for all $k = 0, \cdots, n$. Such a $T$ is unique if and only if $CM(P_0, \cdots, P_n) \neq 0$, that is, $A_0, A_1, \ldots, A_n$ are affinely independent.
### Embedding $n+2$ and $n+3$ points

If $n + 2$ points $P_0, \cdots, P_{n+1}$ can be embedded in $\mathbb{R}^n$ as $A_0, \cdots, A_{n+1}$, then, in addition to the conditions above, a further necessary condition is that the $(n+1)$-simplex formed by $A_0, A_1, \ldots, A_{n+1}$ must have no $(n+1)$-dimensional volume. That is, $CM(P_0, \cdots, P_n, P_{n+1}) = 0$.

The converse also holds. That is, if for all $k = 1, \cdots, n$, $(-1)^{k+1} CM(P_0, \cdots, P_k) \geq 0$, and $CM(P_0, \cdots, P_n, P_{n+1}) = 0$, then such an embedding exists.

For embedding $n + 3$ points in $\mathbb{R}^n$, the necessary and sufficient conditions are similar:

1. For all $k = 1, \cdots, n$, $(-1)^{k+1} CM(P_0, \cdots, P_k) \geq 0$;
2. $CM(P_0, \cdots, P_n, P_{n+1}) = 0$;
3. $CM(P_0, \cdots, P_n, P_{n+2}) = 0$;
4. $CM(P_0, \cdots, P_n, P_{n+1}, P_{n+2}) = 0$.

### Embedding arbitrarily many points

The $n + 3$ case turns out to be sufficient in general: given a semimetric space $(R, d)$, it can be isometrically embedded in $\mathbb{R}^n$ if and only if there exist $P_0, \cdots, P_n \in R$ such that, for all $k = 1, \cdots, n$, $(-1)^{k+1} CM(P_0, \cdots, P_k) \geq 0$, and, for any $P_{n+1}, P_{n+2} \in R$,

1. $CM(P_0, \cdots, P_n, P_{n+1}) = 0$;
2. $CM(P_0, \cdots, P_n, P_{n+2}) = 0$;
3. $CM(P_0, \cdots, P_n, P_{n+1}, P_{n+2}) = 0$.

And such an embedding is unique up to isometry in $\mathbb{R}^n$.
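The sign and flatness conditions above can be checked numerically for a finite point set. This toy sketch (assuming NumPy; not from the cited literature) tests one specific choice of the first $n+1$ points rather than searching over all choices, which suffices when those points are in general position:

```python
import numpy as np

def cayley_menger(dist):
    """Cayley-Menger determinant of a matrix of pairwise distances."""
    m = len(dist)
    M = np.ones((m + 1, m + 1))
    M[:m, :m] = np.asarray(dist, dtype=float) ** 2
    M[-1, -1] = 0.0
    return float(np.linalg.det(M))

def embeddable(dist, n, tol=1e-9):
    """Check the sign/flatness conditions for embedding points
    P_0..P_m (given by their distance matrix) in R^n."""
    d = np.asarray(dist, dtype=float)
    m = d.shape[0] - 1
    # sign conditions on the leading points
    for k in range(1, min(n, m) + 1):
        if (-1) ** (k + 1) * cayley_menger(d[:k + 1, :k + 1]) < -tol:
            return False
    # each further point, and each pair, must keep the set "flat"
    for p in range(n + 1, m + 1):
        idx = list(range(n + 1)) + [p]
        if abs(cayley_menger(d[np.ix_(idx, idx)])) > tol:
            return False
        for q in range(p + 1, m + 1):
            idx2 = list(range(n + 1)) + [p, q]
            if abs(cayley_menger(d[np.ix_(idx2, idx2)])) > tol:
                return False
    return True
```

For example, the four corners of a unit square embed in $\mathbb{R}^2$ but not in $\mathbb{R}^1$.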
Further, if $CM(P_0, \cdots, P_n) \neq 0$, then it cannot be isometrically embedded in any $\mathbb{R}^m$ with $m < n$, and such an embedding is unique up to a unique isometry in $\mathbb{R}^n$.

Thus, Cayley–Menger determinants give a concrete way to calculate whether a semimetric space can be embedded in $\mathbb{R}^n$ for some finite $n$, and if so, what the minimal $n$ is.

## Applications

There are many applications of distance geometry.[3]

In telecommunication networks such as GPS, the positions of some sensors are known (these are called anchors) and some of the distances between sensors are also known: the problem is to identify the positions of all sensors.[5] Hyperbolic navigation is one pre-GPS technology that uses distance geometry to locate ships based on the time it takes for signals to reach anchors.

There are many applications in chemistry.[4][12] Techniques such as NMR can measure distances between pairs of atoms of a given molecule, and the problem is to infer the 3-dimensional shape of the molecule from those distances. A number of software packages have been developed for these applications.

## References

1. ^ Yemini, Y. (1978). "The positioning problem — a draft of an intermediate summary". Conference on Distributed Sensor Networks, Pittsburgh. 2. ^ a b Liberti, Leo; Lavor, Carlile; MacUlan, Nelson; Mucherino, Antonio (2014). "Euclidean Distance Geometry and Applications". SIAM Review. 56: 3–69. arXiv:1205.0349. doi:10.1137/120875909. 3. ^ a b Mucherino, A.; Lavor, C.; Liberti, L.; Maculan, N. (2013). Distance Geometry: Theory, Methods and Applications. 4. ^ a b c Crippen, G.M.; Havel, T.F. (1988). Distance Geometry and Molecular Conformation. John Wiley & Sons. 5. ^ a b Biswas, P.; Lian, T.; Wang, T.; Ye, Y. (2006). "Semidefinite programming based algorithms for sensor network localization". ACM Transactions on Sensor Networks. 2 (2): 188–220. doi:10.1145/1149283.1149286.
6. ^ "Simplex Volumes and the Cayley-Menger Determinant". www.mathpages.com. Archived from the original on 16 May 2019. Retrieved 2019-06-08. 7. ^ Liberti, Leo; Lavor, Carlile (2016). "Six mathematical gems from the history of distance geometry". International Transactions in Operational Research. 23 (5): 897–920. arXiv:1502.02816. doi:10.1111/itor.12170. ISSN 1475-3995. 8. ^ Cayley, Arthur (1841). "On a theorem in the geometry of position". Cambridge Mathematical Journal. 2: 267–271. 9. ^ Menger, Karl (1928-12-01). "Untersuchungen über allgemeine Metrik". Mathematische Annalen (in German). 100 (1): 75–163. doi:10.1007/BF01448840. ISSN 1432-1807. 10. ^ Blumenthal, L. M.; Gillam, B. E. (1943). "Distribution of Points in n-Space". The American Mathematical Monthly. 50 (3): 181. doi:10.2307/2302400. JSTOR 2302400. 11. ^ Menger, Karl (1931). "New Foundation of Euclidean Geometry". American Journal of Mathematics. 53 (4): 721–745. doi:10.2307/2371222. ISSN 0002-9327. JSTOR 2371222. 12. ^ a b c Blumenthal, L.M. (1970). Theory and applications of distance geometry (2nd ed.). Bronx, New York: Chelsea Publishing Company. pp. 90–161. ISBN 978-0-8284-0242-2. LCCN 79113117. 13. ^ Bowers, John C.; Bowers, Philip L. (2017-12-13). "A Menger Redux: Embedding Metric Spaces Isometrically in Euclidean Space". The American Mathematical Monthly. 124 (7): 621. doi:10.4169/amer.math.monthly.124.7.621. S2CID 50040864.
# 2 + 4 + 7 + 11 + 16 + ...

Question: Find the sum to $n$ terms of the series 2 + 4 + 7 + 11 + 16 + ...

Solution: Let $S_{n}$ be the sum of $n$ terms and $T_{n}$ be the $n$th term of the given series. Thus, we have:

$S_{n}=2+4+7+11+16+\ldots+T_{n-1}+T_{n}$   ...(1)

Equation (1) can be rewritten, shifted one place to the right, as

$S_{n}=\phantom{2+{}}2+4+7+11+\ldots+T_{n-2}+T_{n-1}+T_{n}$   ...(2)

On subtracting (2) from (1), the successive differences of the series remain; these differences $2, 3, 4, \ldots$ form an arithmetic progression with first term 2 and common difference 1, so

$0=2+[2+3+4+\ldots \text{ to } (n-1) \text{ terms}]-T_{n}$

$\Rightarrow 2+\left[\frac{(n-1)}{2}\bigl(2\cdot 2+(n-2)\cdot 1\bigr)\right]-T_{n}=0$

$\Rightarrow 2+\left[\frac{(n-1)}{2}(n+2)\right]-T_{n}=0$

$\Rightarrow 2+\left[\frac{n^{2}+n}{2}-1\right]-T_{n}=0$

$\Rightarrow T_{n}=\frac{n^{2}}{2}+\frac{n}{2}+1$

$\because S_{n}=\sum_{k=1}^{n} T_{k}$

$\therefore S_{n}=\sum_{k=1}^{n}\left(\frac{k^{2}}{2}+\frac{k}{2}+1\right)$

$=\frac{1}{2} \sum_{k=1}^{n} k^{2}+\frac{1}{2} \sum_{k=1}^{n} k+\sum_{k=1}^{n} 1$

$=\frac{n(n+1)(2n+1)}{12}+\frac{n(n+1)}{4}+n$

$=n\left(\frac{2n^{2}+3n+1+3n+3+12}{12}\right)$

$=\frac{n}{12}\left(2n^{2}+6n+16\right)$

$=\frac{n}{6}\left(n^{2}+3n+8\right)$
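The closed forms for $T_n$ and $S_n$ can be verified numerically; a quick sketch (the function names are ours):

```python
def nth_term(n):
    """T_n = (n^2 + n + 2) / 2 for the series 2, 4, 7, 11, 16, ..."""
    return (n * n + n + 2) // 2

def partial_sum(n):
    """Closed form S_n = n (n^2 + 3n + 8) / 6."""
    return n * (n * n + 3 * n + 8) // 6
```

The first five terms come out as 2, 4, 7, 11, 16, and partial_sum(5) equals their sum, 40.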
# Blinds for Homekit / Web

For a long time we have had an automation for our blinds in the living room, but at least twice a year, when there's the change from summer to winter time or the other way round, these things suck. Where's the manual? I already did this a dozen times, it can't be so difficult... Turns out it is difficult. I believe these things are made to never be touched again.

So far so good. Since the broadening of home automation solutions, especially HomeKit, there has been the need to add other devices without an official certification. For this there is homebridge, but after a quick'n'dirty search for a plug-and-play solution I didn't find anything very useful. Next step: order a Raspberry Pi Zero and a double relay, having read somewhere about how easy it is to use GPIO with Python.

## Update 2018-11-29

I just moved the frontend into the main repository, so it's much more transparent for everyone to see what's going on there.

### First Steps

So Python is an interpreted language, which is very nice for hacking like this because there is no compile step: whenever you save a change to the file, your server restarts with the new code. My first approach was a Flask boilerplate with some routes, so you get an understanding of how this works:

@app.route('/status')
def status():
    return "something"

The second step is how to make things blink. A nice thing about the common relays you get on eBay or Amazon or somewhere else: they all have a LED for the activated relay.

from gpiozero import LED

pinNumber = 27
roll = LED(pinNumber)
roll.on()  # is now on...

So if you're a Sherlock you can do things like this:

@app.route('/turn-on')
def turnOn():
    roll.on()
    return "your LED is now on! aka something is moving!"

### Fine Tuning

Sometimes, when it is really hot outside or the sun is shining on your glossy screen, you don't want to close the blinds completely but rather just 50%. For this case, the only method known to me is to do it over a time factor.
Movement should be pretty constant, so moving 20% should require double the time of moving 10%. But because there is some difference between the upward and downward movement, you need two factors to calculate the time. So this is how I made it work:

1. travel = int(abs(currentHeight - desiredHeight))
2. sleep(travel * config.upFactor) (or config.downFactor, depending on the direction)

But because this isn't a completely perfect method, I reset currentHeight whenever I drive it to 0 or 100.

Lastly, Flask runs threaded by default, which means it spins up a new thread for every new request. Since the HomeKit drag option makes a lot of calls when you move your finger slowly, I disabled this feature with app.run(debug=False, host='0.0.0.0', threaded=False). Flask also listens on localhost by default, so change that, too.

## Frontend

Because I already had the vue-cli installed, I started with the minimal boilerplate which ships with it, deleted everything unnecessary and hacked together a mobile-optimized button layout. I see some potential to make it more usable, but it does the work for now.

## Recap: What I Learned

My biggest gain from this little project is that Python is really easy to pick up, build a prototype with, and play around with. I definitely have no idea how Python performs in a production environment where speed and security count. But as long as I treat my home network as a secure environment, this shouldn't become a problem.

Second, Flask is a nice framework, easy to understand when you'd like to do something real quick. So, a recommendation for a project like this.

Because there are a lot of people who aren't using Apple's HomeKit, I built my first Vue.js frontend for this case, where you have two big buttons and a slider for the percentage settings (link below). I had played around with Vue for another project, but that was a rather complex thing to start with.
I created a project with vue-cli and deleted everything I didn't need. Maybe not the best approach for such a simple thing, but I really like the approach of having your HTML, JS and CSS all in just one file. You can split up your frontend into different submodules, which is very neat in my opinion! The exported project is embedded in the main repository, ready to use!

My repository
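Putting the pieces above together, here is a minimal sketch of the timing-based positioning logic. The function name, factor values, and relay interface are illustrative assumptions (the relay object just mimics gpiozero's .on()/.off()); this is not the code from the repository.

```python
from time import sleep

def move_to(current, desired, relay, up_factor, down_factor):
    """Drive the relay long enough to travel from `current` to `desired`
    percent closed, using direction-specific seconds-per-percent factors.
    Returns the new assumed height."""
    travel = abs(desired - current)
    # assumption: closing (increasing height) uses the "down" factor
    factor = down_factor if desired > current else up_factor
    relay.on()
    sleep(travel * factor)
    relay.off()
    # timing drifts, so callers should re-anchor the height whenever
    # they drive the blinds to an end stop (0 or 100)
    return desired
```

In the post, two separate relays drive the two directions; choosing the relay from the sign of desired - current works the same way as choosing the factor here.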
# question_answer

Direction: These questions are based on the following graphs: classification of candidates who appeared in a competitive test from different States, and of qualified candidates from those States. Total appeared candidates = 45,000. Total qualified candidates = 9,000.

What is the ratio between the number of candidates qualified from States B and D together and the number of candidates who appeared from State C, respectively?

A) 8 : 37
B) 11 : 12
C) 37 : 40
D) 7 : 37

Answer: (c)

Number of qualified candidates from States B and D = 1440 + 1890 = 3330
Number of candidates who appeared from State C = 3600
$\therefore$ Required ratio = $\frac{3330}{3600}$ = 37 : 40
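The arithmetic in the answer amounts to reducing the ratio by its greatest common divisor; a quick sketch:

```python
from math import gcd

qualified_b_and_d = 1440 + 1890  # qualified candidates from States B and D
appeared_c = 3600                # candidates who appeared from State C

g = gcd(qualified_b_and_d, appeared_c)
ratio = (qualified_b_and_d // g, appeared_c // g)
print(f"{ratio[0]} : {ratio[1]}")  # 37 : 40
```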
## negative sign

$\Delta G^{\circ} = -nFE_{cell}^{\circ}$

Sophia Dinh 1D
Posts: 100
Joined: Thu Jul 25, 2019 12:15 am

### negative sign

Why is there a negative sign in front of n in the standard Gibbs energy equation?

Jessa Maheras 4F
Posts: 121
Joined: Fri Aug 02, 2019 12:16 am

### Re: negative sign

I believe that there is a negative sign due to the loss of energy.

Indy Bui 1l
Posts: 99
Joined: Sat Sep 07, 2019 12:19 am

### Re: negative sign

I believe it has to do with the relationship between E and delta G. When E is positive, delta G should be negative, meaning the reaction is spontaneous.

Wilson 2E
Posts: 100
Joined: Sat Aug 17, 2019 12:18 am

### Re: negative sign

Energy = -work/Charge; rearrange this to get work = -E*Charge.
From Faraday's constant, Charge = n*F.
Therefore work = -n*F*E.
At constant temperature and pressure, maximum work = deltaG.
Therefore deltaG = -n*F*E.

Philip
Posts: 100
Joined: Sat Sep 07, 2019 12:16 am

### Re: negative sign

I think it's to show that energy is released or lost.

Posts: 45
Joined: Sat Aug 17, 2019 12:16 am

### Re: negative sign

delta G* = -nFE
w(max) = -nFE
The system is doing work, so energy leaves the system, resulting in -w.

Jared_Yuge
Posts: 100
Joined: Sat Aug 17, 2019 12:17 am

### Re: negative sign

When E is positive it is spontaneous, so delta G needs to be negative.

Frankie Mele 3J
Posts: 93
Joined: Wed Sep 30, 2020 10:09 pm

### Re: negative sign

Favorable redox reactions have positive voltage differences, which results in E being a positive value and deltaG therefore being negative.

Lesly Lopez 3A
Posts: 95
Joined: Wed Sep 30, 2020 9:54 pm
Been upvoted: 1 time

### Re: negative sign

The negative sign is important because it is telling you that the reaction has released or lost energy. The relationship between E and $\Delta$G is that when E is positive, $\Delta$G should be negative, meaning the reaction is spontaneous.

Samantha Lee 1A
Posts: 98
Joined: Wed Sep 30, 2020 10:05 pm
Been upvoted: 1 time

### Re: negative sign

Yes!
There is a negative sign in front of the n. The equation is $\Delta G^{\circ } = -nFE^{^{\circ}}$ When the reaction is spontaneous, the $\Delta G^{\circ }$ is negative and $E^{\circ }$ is positive. In order for that to occur, there must be a negative sign in the equation. Kimiya Aframian IB Posts: 120 Joined: Wed Sep 30, 2020 9:34 pm ### Re: negative sign Sophia Dinh 1D wrote:why is there a negative sign in front of n in the standard Gibbs energy equation? Hi! I think the negative sign in front of the n (mole value) is because it keeps the relationship between E, delta G, and spontaneity consistent. By this I mean that we need the E value to be positive when delta G is negative. Hope this helps! Dominic Benna 2E Posts: 93 Joined: Wed Sep 30, 2020 10:09 pm ### Re: negative sign A spontaneous reaction has a positive delta E, yet a negative delta G. So, in order to relate the two to each other, the negative sign is added. Edison Tham 3D Posts: 40 Joined: Mon Jun 17, 2019 7:25 am Been upvoted: 1 time ### Re: negative sign The system is doing work and based on the fact that ∆G = maximum work, this would mean that there would be a negative sign.
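To see the sign convention in numbers, a small sketch using a textbook example not from this thread (the Daniell cell: n = 2, E° = +1.10 V gives ΔG° of about -212 kJ/mol, negative and hence spontaneous):

```python
F = 96485  # Faraday constant, C/mol

def delta_g_standard(n, e_cell):
    """Standard reaction Gibbs energy (J/mol) from deltaG° = -n F E°."""
    return -n * F * e_cell

print(delta_g_standard(2, 1.10))  # about -212 kJ/mol, printed in J/mol
```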
# How do you write 90% as a fraction?

Nov 17, 2016

We know that a percentage is a fraction itself:

$90\% = \frac{90}{100}$

Simplify this and it becomes $\frac{9}{10}$.

Hope this helps :)

Jun 24, 2017

So, just to be clear: the percent symbol (%) means "upon 100", so 90% means $\frac{90}{100}$, which, lowered to its lowest terms, is $\frac{9}{10}$.
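Python's standard fractions module performs exactly this reduction to lowest terms; a one-line check:

```python
from fractions import Fraction

# 90% means 90 per 100; Fraction reduces to lowest terms automatically
f = Fraction(90, 100)
print(f)  # 9/10
```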
Phonetics

# Japanese and Korean speakers' production of Japanese fricative /s/ and affricate /ts/*

Kimiko Yamakawa1,**, Shigeaki Amano2

1Faculty of Contemporary Culture, Shokei University, Kumamoto, Japan
2Faculty of Human Informatics, Aichi Shukutoku University, Aichi, Japan

**Corresponding author: jin@shokei-gakuen.ac.jp

© Copyright 2022 Korean Society of Speech Sciences. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Jan 24, 2022; Revised: Mar 15, 2022; Accepted: Mar 15, 2022

Published Online: Mar 31, 2022

## Abstract

This study analyzed the pronunciations of the Japanese fricative /s/ and affricate /ts/ by 24 Japanese and 40 Korean speakers, using the rise and steady+decay durations of the frication part, in order to clarify the characteristics of their pronunciations. Discriminant analysis revealed that the Japanese speakers' /s/ and /ts/ were well classified by the acoustic boundary defined by a discriminant function. Using this boundary, the Korean speakers' production of /s/ and /ts/ was analyzed. It was found that, in the Korean speakers' pronunciation, misclassification of /s/ as /ts/ was more frequent than that of /ts/ as /s/, indicating that both the /s/ and /ts/ distributions shift toward short rise and steady+decay durations. Moreover, their distributions were very similar to those of Korean fricatives and affricates. These results suggest that the Korean speakers' classification errors might arise from their use of Korean lax and tense fricatives to pronounce Japanese /s/, and of Korean lax and tense affricates to pronounce Japanese /ts/.

Keywords: affricate; fricative; non-native speaker; production boundary

## 1. Introduction

The Japanese language has a voiceless alveolar fricative /s/ and a voiceless alveolar affricate /ts/ (Table 1) (cf. Kubozono, 2015; Vance, 2008). These two consonants have similar spectral features: both consist of frication noise in a frequency region higher than about 4 kHz. However, they differ in intensity envelope: /s/ tends to have a gentle onset and a long sustained duration in the intensity envelope, whereas /ts/ tends to have a steep onset and a short sustained duration.

Table 1. Voiceless fricatives and affricates in the Japanese and Korean languages related to this study (reconstructed from the flattened original; cells left blank where the language has no such consonant)

| Language | Alveolar fricative | Alveolo-palatal fricative | Alveolar affricate | Alveolo-palatal affricate |
|---|---|---|---|---|
| Japanese | s | ɕ | ts | tɕ |
| Korean (lax) | s | | | tɕ |
| Korean (tense) | s* | | | tɕ* |
| Korean (aspirated) | | | | tɕʰ |

Unlike the Japanese language, the Korean language distinguishes a lax alveolar fricative /s/ and a tense alveolar fricative /s*/ (Table 1) (Ha et al., 2009; Shin, 2015). /s/ has a longer frication duration than /s*/ at a word-initial position (Kang, 2000; Shin, 2015). /s*/ has a longer aspiration duration than /s/ at a word-initial position, but it has no aspiration at a word-medial position (Shin, 2015). Additionally, the centroid of the fricative noise is lower for /s/ than for /s*/ (Cho et al., 2002).

Further, in contrast to Japanese, Korean does not have an alveolar affricate /ts/. However, it does have a lax alveolo-palatal affricate /tɕ/, a tense alveolo-palatal affricate /tɕ*/, and an aspirated alveolo-palatal affricate /tɕʰ/ (Ha et al., 2009; Shin, 2015). At a word-initial position, /tɕ*/, /tɕ/, and /tɕʰ/ have a short, medium, and long frication duration, respectively (Shin, 2015). At a word-medial position, this order of frication duration is retained. However, /tɕ/ can be pronounced as a voiced affricate between voiced sounds. In addition, /tɕ/, /tɕʰ/, and /tɕ*/ have a short, medium, and long closure duration, respectively (Shin, 2015).
The perceptual assimilation model for second-language learners (PAM-L2) proposed by Best & Tyler (2007) predicts that speakers of a native language (L1) have difficulty discriminating phonemes in a foreign language (L2) when two phonemes in L2 are perceived as one phoneme in L1 with equal goodness, or when L2 phonemes are not perceived as any L1 phoneme. In line with PAM-L2's expectations, it is often observed that non-native speakers of any language have difficulty pronouncing a foreign-language phoneme that does not exist in their first language. For example, Japanese speakers struggle to correctly pronounce English /l/ and /r/, a contrast that does not exist in Japanese (Zimmermann et al., 1984).

Since the Korean language does not have an alveolar affricate /ts/ (Table 1), Korean speakers may have difficulty pronouncing Japanese /ts/. Indeed, previous studies based on questionnaire surveys of Japanese teachers about non-native Japanese learners (e.g., Matsuzaki, 1999; Sukegawa, 1993) reported that Korean speakers are not good at distinguishing Japanese /s/ and /ts/.

There are two types of pronunciation error: mispronunciation of /s/ as /ts/ (hereafter the /s/→/ts/ error) and mispronunciation of /ts/ as /s/ (hereafter the /ts/→/s/ error). The previous studies (e.g., Matsuzaki, 1999; Sukegawa, 1993) often reported the /ts/→/s/ error but rarely the /s/→/ts/ error. However, this is not evidence that the /s/→/ts/ error never occurs. Previous studies may have overlooked the /s/→/ts/ error because the /ts/→/s/ error draws Japanese teachers' attention and might thereby mask its occurrence.

Another problem with the previous studies (e.g., Matsuzaki, 1999; Sukegawa, 1993) is that they mainly investigated the occurrence and characteristics of the /s/ and /ts/ errors, but not the errors in terms of acoustic features. The acoustic features related to the cause of Korean speakers' mispronunciations have not been clarified.
As for the acoustic features of /s/ and /ts/, Yamakawa et al. (2012) analyzed the intensity envelope of /s/ and /ts/ pronounced by native Japanese speakers and developed a method to distinguish the two consonants. They divided the intensity envelope into rise, steady, and decay components, and approximated these three components with lines of positive, zero, and negative slope, respectively (Figure 1). Yamakawa et al. (2012) demonstrated that /s/ and /ts/ are discriminated with a small error (1.2−6.1%) by a linear function of two variables: the rise duration and the sum of the steady and decay durations (hereafter referred to as "steady+decay"). Their results indicate that the rise and steady+decay durations are relevant acoustic features for distinguishing Japanese /s/ and /ts/. Yamakawa & Amano (2015) demonstrated that the method of Yamakawa et al. (2012) is also applicable to the distinction between the fricative /ɕ/ and the affricate /tɕ/: these consonants are likewise separated with low confusion errors when using the rise and steady+decay durations.

Figure 1. Schematic diagram of the intensity envelope and the duration of the consonant parts for analysis.

Based on this background, this study aimed to clarify the acoustic characteristics of the Japanese fricative /s/ and affricate /ts/ pronounced by introductory-level Korean learners of Japanese, using the two variables (rise and steady+decay durations) proposed by Yamakawa et al. (2012). In this study, the "production boundary" is defined as an acoustic boundary obtained by discriminant analysis using the rise and steady+decay durations, and a "pronunciation error" is defined as a classification error using the production boundary as a classifier. This study first obtained the Japanese and Korean speakers' mappings of /s/ and /ts/ and their production boundaries on a coordinate plane of the rise and steady+decay durations.
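The three-line envelope approximation described above can be imitated with a brute-force search over the two breakpoints. This toy sketch (assuming NumPy; it is not the authors' fitting code, and it omits their constraints on the slopes' signs) fits least-squares lines to the rise and decay parts and a constant to the steady part:

```python
import numpy as np

def fit_three_lines(env):
    """Approximate an intensity envelope (1-D array) by rise, steady,
    and decay segments; returns (i, j, sse), where env[:i] is the rise,
    env[i:j] the steady part, and env[j:] the decay."""
    n = len(env)
    t = np.arange(n)
    best = (2, n - 1, np.inf)
    for i in range(2, n - 3):
        for j in range(i + 2, n - 1):
            sse = 0.0
            for seg in (slice(0, i), slice(j, n)):  # rise and decay lines
                a, b = np.polyfit(t[seg], env[seg], 1)
                sse += float(np.sum((a * t[seg] + b - env[seg]) ** 2))
            mid = env[i:j]                          # steady: constant fit
            sse += float(np.sum((mid - mid.mean()) ** 2))
            if sse < best[2]:
                best = (i, j, sse)
    return best
```

The rise duration then corresponds to the first i samples and the steady+decay duration to the remaining n - i samples, scaled by the frame step.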
Then, using the Japanese speakers' production boundary, the Korean speakers' pronunciation errors of Japanese /s/ and /ts/ were identified to examine their characteristics. To obtain further information about the cause of the pronunciation errors, this study also analyzed Korean fricatives and affricates in monosyllables pronounced by the Korean speakers.

## 2. Speech Recording

### 2.1. Participants

The participants in the experiment were 24 Japanese speakers (12 males and 12 females) and 40 Korean speakers (20 males and 20 females). Their average age was 26.2 years [minimum (Min) = 21, maximum (Max) = 30, standard deviation (SD) = 3.2] for the Japanese speakers, and 24.2 years (Min = 19, Max = 29, SD = 2.6) for the Korean speakers. The Korean speakers were Japanese learners at the beginner level. They had learned the Japanese language for an average of 121 hours (Min = 18, Max = 300, SD = 75.2) and had never lived in Japan. The participants were paid for their participation.

### 2.2. Word Materials

The word materials used for recording were four Japanese minimal word pairs, 1−4 morae long (Table 2), having the same phoneme sequence except that their initial phoneme was /s/ or /ts/. The two items of each minimal pair had the same accent pattern and similar auditory word familiarity (Amano & Kondo, 1999). These characteristics are desirable for a speech production experiment because the word materials are then unlikely to be affected by differences in phoneme sequence, accent pattern, or word familiarity. The word materials served for both the Japanese and the Korean speakers' recordings.

Table 2. Minimal pair words with an initial phoneme /s/ or /ts/.
Word meaning and auditory word familiarity are shown in parentheses and brackets, respectively.

| Word length (mora) | Accent pattern | /s/ word | /ts/ word |
|---|---|---|---|
| 1 | H | /sɯ/ (vinegar) [4.78] | /tsɯ/ (harbor) [3.94] |
| 2 | LH | /sɯrɯ/ (do) [5.19] | /tsɯrɯ/ (fish) [5.34] |
| 3 | LHL | /sɯnerɯ/ (sulk) [5.66] | /tsɯnerɯ/ (pinch) [5.78] |
| 4 | LHLL | /sɯmaɡoto/ (single-string harp) [2.22] | /tsɯmaɡoto/ (multi-string harp) [2.53] |

2.3. Monosyllable Materials

In addition to these word materials, Korean monosyllables with a fricative or affricate consonant were used for the recordings. The consonants were lax fricative /s/ (ㅅ), tense fricative /s*/ (ㅆ), lax affricate /tɕ/ (ㅈ), and tense affricate /tɕ*/ (ㅉ). The vowels were /ɯ/ (ㅡ) and /u/ (ㅜ). All combinations of these four consonants and two vowels yielded eight consonant–vowel monosyllables. The monosyllable materials served only for the Korean speakers’ recordings.

2.4. Procedure

The Japanese speakers participated in speech recordings in a quiet room at the National Institute of Informatics or at the NTT Human Interface Laboratories in Tokyo, Japan. The Korean speakers participated in the recordings in a quiet room at the Medialab recording studio or at Hongik University in Seoul, Korea. For the word recordings, one of the word materials was presented at the center of a computer screen in hiragana characters in each trial. Similarly, for the monosyllable recordings, one of the monosyllable materials was presented in Hangul characters (i.e., one of 스, 수, 쓰, 쑤, 즈, 주, 쯔, and 쭈). Speakers were asked to pronounce the word or monosyllable at a normal speaking rate. Their pronunciation was digitally recorded using a microphone (ECM-999, SONY, Tokyo, Japan) and an A/D converter (UA25-EX, Roland, Hamamatsu, Japan) with 16-bit quantization and a 48-kHz sampling frequency, and stored as a digital audio file on a computer. The word materials were recorded four times in random order for each participant, i.e., there were 32 recording trials per participant.
Meanwhile, the monosyllable materials were recorded only once, in a randomized order for each participant. The word materials were recorded first, followed by the monosyllable materials.

## 3. Analysis

3.1. Japanese Speakers

The rise and steady+decay durations of /s/ and /ts/ in the words pronounced by the Japanese speakers were obtained using the estimation method proposed by Yamakawa et al. (2012). That is, the intensity envelope of the frication of /s/ and /ts/ was approximated with three lines for the rise, steady, and decay parts (Figure 1) by minimizing the squared error between the envelope and the lines. The rise and steady+decay durations were then identified from the approximated lines. Table 3 shows the mean and standard deviation of the rise and steady+decay durations of /s/ and /ts/ pronounced by the Japanese speakers.

Table 3. M and SD of the rise and steady+decay durations (ms) of Japanese /s/ and /ts/ pronounced by Japanese and Korean speakers. The number of tokens (n) is also shown.

| Phoneme | Speakers | n | Rise M | Rise SD | Steady+decay M | Steady+decay SD |
|---|---|---|---|---|---|---|
| /s/ | Japanese | 384 | 76.3 | 33.0 | 101.1 | 30.4 |
| /s/ | Korean | 640 | 70.8 | 29.7 | 99.9 | 32.3 |
| /ts/ | Japanese | 384 | 37.3 | 24.7 | 63.7 | 26.2 |
| /ts/ | Korean | 640 | 32.9 | 23.4 | 48.1 | 31.1 |

M, mean; SD, standard deviation.

Discriminant analysis for /s/ and /ts/ was conducted using the rise and steady+decay durations as independent variables and the labels /s/ and /ts/ as the dependent variable. The discriminant function of /s/ and /ts/ for the Japanese speakers was obtained as Equation 1:

$f=0.038x+0.039y-5.414$ (1)

where f is the predicted label, x is the rise duration (ms), and y is the steady+decay duration (ms). The discriminant error (regarded as the pronunciation error) of the Japanese speakers’ /s/ and /ts/ was 6.25% (Table 4). This low error ratio indicates that the discriminant analysis was successful.

Table 4.
The Japanese and Korean speakers’ pronunciation error ratios (%) of /s/ and /ts/ based on the Japanese speakers’ production boundary. The number of pronounced items is shown in parentheses.

| Speaker | /s/→/ts/ | /ts/→/s/ | All |
|---|---|---|---|
| Japanese | 8.33 (384) | 4.17 (384) | 6.25 (768) |
| Korean | 19.06 (640) | 6.25 (640) | 12.66 (1,280) |

By substituting zero for f in Equation 1, the production boundary of /s/ and /ts/ for the Japanese speakers was obtained as Equation 2:

$0=0.038x+0.039y-5.414$ (2)

Figure 2 shows the Japanese speakers’ /s/ and /ts/ together with their production boundary (Equation 2). The tokens of /s/ and /ts/ are well separated by the production boundary, which corresponds to the low errors described above. The average speaking rate of the Japanese speakers was 4.52 mora/s (SD=0.82 mora/s).

Figure 2. The Japanese speakers’ /s/ and /ts/ plotted on a coordinate plane of the rise and steady+decay durations. The solid line represents the production boundary of /s/ and /ts/ (Equation 2).

3.2. Korean Speakers

As with the Japanese speakers, the rise and steady+decay durations of /s/ and /ts/ in the words pronounced by the Korean speakers were obtained using the estimation method proposed by Yamakawa et al. (2012). The mean and SD of the rise and steady+decay durations of /s/ and /ts/ pronounced by the Korean speakers are shown in Table 3. The Korean speakers’ discriminant function and production boundary were obtained as Equations 3 and 4, respectively:

$f=0.029x+0.027y-3.464$ (3)

$0=0.029x+0.027y-3.464$ (4)

where f is the predicted label, x is the rise duration (ms), and y is the steady+decay duration (ms). The discriminant error given by Equation 3 was 10.47%. Using the discriminant function of the Japanese speakers (Equation 1), the pronunciation errors of /s/ and /ts/ by the Korean speakers were identified.
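Applied as a classifier, the Japanese speakers’ discriminant function (Equation 1) simply checks the sign of f. The sketch below is illustrative (the function names are assumptions; the sign convention, positive f for /s/, follows the error definitions used in this study):

```python
# Classifier sketch based on Equation 1 (Japanese speakers' discriminant
# function). x is the rise duration (ms), y the steady+decay duration (ms);
# f > 0 predicts /s/, f < 0 predicts /ts/.

def discriminant(rise_ms, steady_decay_ms):
    """Equation 1: f = 0.038x + 0.039y - 5.414."""
    return 0.038 * rise_ms + 0.039 * steady_decay_ms - 5.414

def classify(rise_ms, steady_decay_ms):
    return "/s/" if discriminant(rise_ms, steady_decay_ms) > 0 else "/ts/"
```

For example, the Japanese speakers’ mean values from Table 3 fall on the expected sides of the boundary: `classify(76.3, 101.1)` gives "/s/" and `classify(37.3, 63.7)` gives "/ts/".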
Namely, if f in Equation 1 was lower than zero for an /s/ item, it was identified as an /s/→/ts/ error, whereas if f was higher than zero for a /ts/ item, it was identified as a /ts/→/s/ error. The ratios of these errors made by the Korean speakers are shown in Table 4. The differences in ratios were tested with the z-test for two proportions. The ratio difference between the Japanese and Korean speakers was significant for the /s/→/ts/ error (z=4.65, p<.001) and for all errors (z=4.63, p<.001), but not for the /ts/→/s/ error. These results indicate that the Korean speakers made more /s/→/ts/ errors, and more errors overall, than the Japanese speakers. The ratio difference between /s/→/ts/ and /ts/→/s/ errors was significant for the Japanese speakers (z=2.38, p<.05) and for the Korean speakers (z=6.89, p<.001), indicating that both groups made more /s/→/ts/ errors than /ts/→/s/ errors.

Figure 3 shows the Korean speakers’ /s/ and /ts/ together with the production boundaries of the Japanese speakers (Equation 2) and the Korean speakers (Equation 4). Many tokens of /s/ are plotted below the Japanese production boundary, which corresponds to the high /s/→/ts/ error ratio of the Korean speakers in Table 4. The average speaking rate of the Korean speakers was 3.96 mora/s (SD=1.21 mora/s), which is significantly slower than that of the Japanese speakers [t(2,046)=10.55, p<.001].

Figure 3. The Korean speakers’ /s/ and /ts/ plotted on a coordinate plane of the rise and steady+decay durations. The solid and broken lines respectively represent the Japanese and Korean speakers’ production boundaries (Equations 2 and 4).

The rise and steady+decay durations of the Korean consonants /s/, /s*/, /tɕ/, and /tɕ*/ in the monosyllables pronounced by the Korean speakers were obtained using the estimation method proposed by Yamakawa et al. (2012).
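The two-proportion z-test reported above can be reproduced from the counts implied by Table 4 (19.06% of 640 ≈ 122 Korean /s/→/ts/ errors versus 8.33% of 384 = 32 Japanese ones). The pooled-proportion form below is an assumption about the exact variant used:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using a pooled proportion estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Korean vs. Japanese /s/ -> /ts/ error counts implied by Table 4
z = two_proportion_z(122, 640, 32, 384)  # close to the reported z = 4.65
```

With these counts the statistic comes out near the z = 4.65 reported for the /s/→/ts/ comparison, which supports this reading of the table.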
Figure 4 shows a scattergram of these consonants on a coordinate plane of the rise and steady+decay durations, together with the production boundaries of the Japanese speakers (Equation 2) and the Korean speakers (Equation 4). The combined distribution of /s/ and /s*/ was very similar to that of the Japanese consonant /s/ pronounced by the Korean speakers in Figure 3. In addition, the combined distribution of /tɕ/ and /tɕ*/ was very similar to that of the Japanese consonant /ts/.

Figure 4. The Korean speakers’ lax and tense fricatives (/s/ and /s*/) and lax and tense affricates (/tɕ/ and /tɕ*/) in Korean monosyllables plotted on a coordinate plane of the rise and steady+decay durations. The solid and broken lines respectively represent the Japanese and Korean speakers’ production boundaries (Equations 2 and 4) of /s/ and /ts/ at the initial position of Japanese words.

When the Korean fricatives (/s/, /s*/) and affricates (/tɕ/, /tɕ*/) were categorized by the Japanese speakers’ boundary of /s/ and /ts/ (Equation 2), the pronunciation error ratio was 7.5% for the fricative→affricate error, 3.8% for the affricate→fricative error, and 5.6% overall. That is, as with the results in Table 4, the fricative→affricate error ratio was higher than the affricate→fricative error ratio. In contrast, when the Korean consonants /s/, /s*/, /tɕ/, and /tɕ*/ were categorized by the Korean speakers’ boundary of /s/ and /ts/ (Equation 4), the error ratio was 2.5% for the fricative→affricate error, 6.7% for the affricate→fricative error, and 4.7% overall. The low overall error ratio indicates that the Korean fricatives and affricates in monosyllables are well discriminated by the boundary derived from the Korean speakers’ /s/ and /ts/ in words.

## 4. Discussion

This study investigated Korean speakers’ /s/ and /ts/ production using the rise and steady+decay durations estimated by the method proposed by Yamakawa et al.
(2012), and analyzed their acoustic characteristics in terms of the Japanese speakers’ production boundary of /s/ and /ts/. The results indicated that Korean speakers at the beginner level make more pronunciation errors for /s/ and /ts/ than Japanese speakers, showing that Korean speakers are not good at realizing Japanese /s/ and /ts/. Their low ability to distinguish /s/ and /ts/ is understandable because the Korean language does not have /ts/. Furthermore, the results indicated that Korean speakers make more /s/→/ts/ errors than /ts/→/s/ errors (Table 4), which means that their pronunciation of /s/ is biased toward /ts/, which has short rise and steady+decay durations. The Korean speakers’ pronunciation of /ts/ is similarly biased: their /ts/ in Figure 3 is distributed nearer to the origin of the coordinate axes than the Japanese speakers’ /ts/ in Figure 2, indicating that Korean speakers pronounce /ts/ with shorter rise and steady+decay durations than Japanese speakers do. Taken together, the Korean speakers’ distributions of both /s/ and /ts/ are shifted toward the origin compared with those of the Japanese speakers. Because of this shift, the Korean speakers’ production boundary (Equation 4) is located on the lower-left (origin) side of the Japanese speakers’ production boundary (Equation 2) in Figure 3.

What causes this shift of the Korean speakers’ /s/ and /ts/? One possible cause for the shift of /s/ is that Korean speakers use the Korean lax and tense fricatives (/s/ and /s*/) for Japanese /s/. This notion is supported by the results for /s/ and /s*/ in the Korean monosyllables in Figure 4: the combined distribution of /s/ and /s*/ is similar to the distribution of Japanese /s/ in Figure 3.
Furthermore, when the Japanese speakers’ production boundary of /s/ and /ts/ (Equation 2) was applied to the discrimination of the Korean fricatives (/s/ and /s*/) and affricates (/tɕ/ and /tɕ*/), the fricative→affricate error ratio was higher than the affricate→fricative error ratio. A similar tendency is seen in the error ratios for the Korean speakers’ /s/ and /ts/ in Table 4. These results suggest that Korean speakers use the Korean fricatives /s/ and /s*/ to pronounce Japanese /s/. However, /s/ and /s*/ in Figure 4 intrinsically have a different distribution from Japanese /s/ in Figure 2: they are distributed closer to /tɕ/ and /tɕ*/ than Japanese /s/ on the coordinate plane of the rise and steady+decay durations. As a result of the use of Korean fricatives with this characteristic, Japanese /s/ pronounced by the Korean speakers may shift toward the origin.

Meanwhile, such an alternative is not available for Japanese /ts/ because the Korean language does not have /ts/. In that case, Korean speakers probably transfer the pronunciation manner of the Korean lax and tense affricates (/tɕ/ and /tɕ*/) to pronounce Japanese /ts/, because these Korean affricates have acoustic features similar to /ts/. The Korean aspirated affricate /tɕʰ/ cannot be used for this transfer because Japanese /ts/ is not aspirated by default. This idea of transfer is supported by the result that the combined distribution of Korean /tɕ/ and /tɕ*/ in Figure 4 is similar to the distribution of Japanese /ts/ pronounced by the Korean speakers in Figure 3. Furthermore, when calculated with the Japanese speakers’ production boundary, the Korean speakers showed a low affricate→fricative error rate (6.7%) similar to the /ts/→/s/ error rate (6.25%) in Table 4. These results suggest that Korean speakers transfer Korean /tɕ/ and /tɕ*/ to pronounce Japanese /ts/.
Since Korean /tɕ/ and /tɕ*/ in Figure 4 intrinsically have distributions closer to the origin than Japanese /ts/ in Figure 2, the Korean speakers’ transfer may result in the shift of /ts/ toward the origin.

As described above, Korean speakers pronounce Japanese /s/ and /ts/ with a shift toward the origin. As a consequence, they make more /s/→/ts/ errors and fewer /ts/→/s/ errors. Incidentally, the low /ts/→/s/ error rate does not necessarily mean that the Korean speakers’ /ts/ sounds natural. As seen in Figure 3, the Korean speakers’ /ts/ is closer to the origin than the Japanese speakers’ /ts/ in Figure 2. This means that Korean speakers pronounce /ts/ with very short rise and steady+decay durations, which correspond to the short-duration acoustic features of the Korean tense affricate /tɕ*/ (Shin, 2015). A /ts/ with such different acoustic features might give an unnatural impression to Japanese listeners even though it is categorized as /ts/. This notion about naturalness should be examined in a future study.

This study is significant for instruction in Japanese pronunciation for Korean speakers because it clarifies that most Korean speakers have a common bias in pronouncing /s/ and /ts/. Korean speakers should pronounce /s/ with longer rise and steady+decay durations. Although a /ts/ pronounced with short rise and steady+decay durations does not cause an error toward /s/, it may degrade the naturalness and/or intelligibility of /ts/; on this point, Korean speakers should also pronounce /ts/ with longer rise and steady+decay durations. By teaching this knowledge to Korean speakers, their pronunciation of /s/ and /ts/ will improve: they will make fewer errors between /s/ and /ts/, and they will be able to pronounce /s/ and /ts/ more naturally and intelligibly.

Since the rise and steady+decay durations are time-domain variables, they probably vary with speaking rate, as would the production boundary defined by these durations.
Although the participants were asked to pronounce at a normal speaking rate, there might have been some variation in speaking rate between participants. In particular, if the Korean speakers had tended to pronounce at a faster speaking rate than the Japanese speakers, this might have biased the present study, because a faster speaking rate would shorten the rise and steady+decay durations and thus shift the /s/ and /ts/ distributions toward the origin. However, this was not the case: the Korean speakers pronounced the word items at a significantly slower speaking rate than the Japanese speakers, as described in Section 3.2. A slower speaking rate lengthens the rise and steady+decay durations, and hence it cannot be the cause of the shift of the /s/ and /ts/ distributions toward the origin. If the Korean speakers had pronounced the word items at the same speaking rate as the Japanese speakers, the /s/ and /ts/ distributions might have shifted even closer to the origin, resulting in more /s/→/ts/ errors than in the current results. However, these notions rest on the assumption that speaking rate affects the rise and steady+decay durations; this assumption is plausible but has not been confirmed, and a future study is necessary to determine the effects of speaking rate on these durations.

This study clarified the characteristics of /s/ and /ts/ at the word-initial position, but it did not treat these consonants at the word-medial position. The /s/ and /ts/ at the word-medial position might be easy to distinguish because /ts/ has a closure preceding its burst part whereas /s/ does not. However, /ts/ at the word-medial position may pose another problem for Korean speakers, because Korean lax affricates at the word-medial position may be realized as voiced between voiced sounds rather than voiceless (Shin, 2015), although the affricates at the word-initial position always appear as voiceless.
In other words, since the Korean language does not distinguish voiceless and voiced lax affricates (Ha et al., 2009), Korean speakers may mispronounce the voiceless affricate /ts/ as a voiced affricate at the word-medial position if they mimic /ts/ with the lax affricate /tɕ/. Moreover, if Korean speakers mimic /ts/ with the tense affricate /tɕ*/, they might mispronounce it as a geminated affricate, because the tense affricate /tɕ*/ has a long closure duration, which is the main acoustic feature of the Japanese geminate affricate. For these reasons, Korean speakers might have more trouble pronouncing /ts/ at the word-medial position than at the word-initial position. Considering these points, Korean speakers’ /s/ and /ts/ at the word-medial position should be examined in a future study.

One might suspect that the Korean speakers’ pronunciation errors are not perceived as errors by native Japanese speakers, because in this study the errors were identified acoustically, not perceptually. Although several studies (e.g., Baese-Berk, 2019; Flege & Bohn, 2021) have argued for a weak or absent correlation between speech production and perception, other studies (e.g., Amano & Hirata, 2010, 2015; Denes & Pinson, 1993) have claimed that speech production and perception are closely related and that the production and perceptual boundaries of phonemes should coincide. This coincidence of the production and perceptual boundaries has in fact been confirmed experimentally. For example, Amano & Hirata (2010, 2015) conducted an acoustic analysis and a perception experiment for Japanese singleton and geminate stops at various speaking rates, and demonstrated that the production and perceptual boundaries of the stops were represented by almost the same lines on the coordinate plane of closure and subword durations.
Based on these results, it is highly probable that the Japanese speakers’ production and perceptual boundaries of /s/ and /ts/ are identical. If the boundaries are identical, the Korean speakers’ errors identified by the production boundary should be the same as the errors identified by the perceptual boundary. Therefore, the Korean speakers’ pronunciation errors in this study are almost certainly perceived as errors by native Japanese speakers.

If speech production and perception have a close relationship (e.g., Amano & Hirata, 2010, 2015; Denes & Pinson, 1993), the perception of /s/ and /ts/ may have characteristics similar to their production as observed in this study. Namely, Korean speakers might misperceive /ts/ as /s/ more frequently than /s/ as /ts/ as a consequence of a perceptual boundary shifted toward the origin. This notion should be examined in a future study. However, examination of the perceptual boundary alone is not enough to clarify the characteristics of the perception of /s/ and /ts/. The sensitivity in discriminating these two consonants should also be examined, because even if Korean speakers have the same perceptual boundary as Japanese speakers, they can misperceive /s/ and /ts/ because of low sensitivity in discriminating them. A future study should investigate the relationships between the perception and production of /s/ and /ts/ while paying attention to this point.

When assuming that the distributions of phoneme categories in production (Figures 2−4) correspond to those in perception, some implications can be provided for PAM-L2 proposed by Best & Tyler (2007). PAM-L2 distinguishes the following four cases for the perception of L1 and L2 phonological categories: 1. Only one L2 phonological category is perceived as equivalent (perceptually assimilated) to a given L1 phonological category, 2.
Both L2 phonological categories are perceived as equivalent to the same L1 phonological category, but one is perceived as more deviant than the other, 3. Both L2 phonological categories are perceived as equivalent to the same L1 phonological category, and as equally good or poor instances of that category, and 4. No L1–L2 phonological assimilation (Best & Tyler, 2007). However, none of these cases fits the current relationship between Korean L1 phonemes and Japanese L2 phonemes. Korean speakers perceptually map the single L2 phoneme /s/ to the two L1 phonemes /s/ and /s*/, and they also map the single L2 phoneme /ts/ to the two L1 phonemes /tɕ/ and /tɕ*/. That is, one L2 phonological category is perceived as equivalent to two L1 phonological categories. This is a new case that PAM-L2 should consider. Perceptual learning of Japanese L2 phonemes would occur as PAM-L2 claims, and the Korean speakers’ boundary would approach that of the Japanese speakers; in that process, two Korean L1 phonemes would affect the perceptual learning. In some sense, the new case can be regarded as an extension of case 1, but it is more complicated because there are two L1 phonemes and interaction between them is expected. Future investigations are needed to clarify the perception and learning process in this new case, which would improve PAM-L2.

This study showed that Korean speakers pronounce Japanese /s/ and /ts/ with a bias related to the fricatives and affricates of the Korean language. Many languages, such as English, Spanish, Thai, and Vietnamese, do not have /ts/. Thus, /s/ and /ts/ produced by native speakers of these languages might be affected by the phonemes in their languages that are similar to /s/ and /ts/, and these effects may differ according to which phonemes are similar to /s/ and /ts/. These points should be examined in a future study.

## Notes

* This study was supported by JSPS KAKENHI Grant Numbers JP21530782, JP22320081, JP26370464, and JP17K02705.
We would like to thank the National Institute of Informatics, NTT Human Interface Laboratories, and Professor Hyunsoon Kim of Hongik University in Seoul for their assistance with the recordings.

## References

1. Amano, S., & Hirata, Y. (2010). Perception and production boundaries between single and geminate stops in Japanese. The Journal of the Acoustical Society of America, 128(4), 2049-2058.
2. Amano, S., & Hirata, Y. (2015). Perception and production of singleton and geminate stops in Japanese: Implications for the theory of acoustic invariance. Phonetica, 72(1), 43-60.
3. Amano, S., & Kondo, T. (1999). Nihongo no goitokusei (Lexical properties of Japanese). Tokyo, Japan: Sanseido.
4. Baese-Berk, M. M. (2019). Interactions between speech perception and production during learning of novel phonemic categories. Attention, Perception, & Psychophysics, 81(4), 981-1005.
5. Best, C. T., & Tyler, M. D. (2007). Nonnative and second-language speech perception. In O. S. Bohn & M. J. Munro (Eds.), Language experience in second language speech learning: In honor of James Emil Flege (pp. 13-34). Amsterdam, Netherlands: John Benjamins.
6. Cho, T., Jun, S. A., & Ladefoged, P. (2002). Acoustic and aerodynamic correlates of Korean stops and fricatives. Journal of Phonetics, 30(2), 193-228.
7. Denes, P. B., & Pinson, E. N. (1993). The speech chain: The physics and biology of spoken language. New York, NY: W.H. Freeman and Company.
8. Flege, J. E., & Bohn, O. S. (2021). The revised speech learning model (SLM-r). In R. Wayland (Ed.), Second language speech learning: Theoretical and empirical progress (pp. 3-83). Cambridge, UK: Cambridge University Press.
9. Ha, S., Johnson, C. J., & Kuehn, D. P. (2009). Characteristics of Korean phonology: Review, tutorial, and case studies of Korean children speaking English. Journal of Communication Disorders, 42(3), 163-179.
10. Kang, K. S. (2000). On Korean fricatives. Speech Sciences, 7(3), 53-68.
11. Kubozono, H. (2015).
Introduction to Japanese phonetics and phonology. In H. Kubozono (Ed.), Handbook of Japanese phonetics and phonology (pp. 1-40). Boston, MA: De Gruyter Mouton.
12. Matsuzaki, H. (1999). Phonetic education of Japanese for Korean speakers. Journal of the Phonetic Society of Japan, 3, 26-35.
13. Shin, J. (2015). Vowels and consonants. In L. Brown & J. Yeon (Eds.), The handbook of Korean linguistics (pp. 3-21). Malden, MA: Wiley-Blackwell.
14. Sukegawa, Y. (1993). Utterance tendency of non-native Japanese speakers: Results of questionnaire survey. In Japanese speech and education, Research Report of Grant-in-Aid for Scientific Research on Priority Areas by Ministry of Education, Science and Culture (pp. 187-222).
15. Yamakawa, K., & Amano, S. (2015). Discrimination of Japanese fricatives and affricates by production boundaries in time and spectral domains: A case study of a female native speaker. Acoustical Science and Technology, 36(4), 296-301.
16. Yamakawa, K., Amano, S., & Itahashi, S. (2012). Variables to discriminate affricate [ʦ] and fricative [s] at word initial in spoken Japanese words. Acoustical Science and Technology, 33(3), 154-159.
17. Vance, T. J. (2008). The sounds of Japanese. Cambridge, UK: Cambridge University Press.
18. Zimmermann, G. N., Price, P. J., & Ayusawa, T. (1984). The production of English /r/ and /l/ by two Japanese speakers differing in experience with English. Journal of Phonetics, 12(3), 187-193.
# Trying to draw the tautological line bundle ($\subseteq \mathbb{CP}^1\times \mathbb{C}^2$)

In order to learn about vector bundles, I would like to draw the tautological vector bundle over the complex projective line $$E = \{(x,v) \in \mathbb{CP}^1 \times \mathbb{C}^2 : v \in x \} .$$ Identifying the complex projective line with the Riemann sphere, $\mathbb{CP}^1 \cong S^2$, I hope that it might be possible to visualize this bundle by attaching small planes to each point of the sphere, similar to how one can visualize the tangent bundle of the sphere. In other words, I'm looking for an embedding $E \hookrightarrow S^2 \times \mathbb{R}^3$ into a trivial bundle. (Obviously, $E$ has to be viewed as a 2-dimensional real vector bundle for this to make sense.) I am aware that such a thing might not exist, in which case I would like to learn why.

I claim that there is no bundle embedding of the realification of the tautological bundle $\mathcal{O}(-1)$ into the trivial (real) bundle $S^2 \times \mathbb{R}^3$. Suppose there were; then we could take the orthogonal complement $N$, and we would obtain a decomposition $$N \oplus \mathcal{O}(-1)_{\mathbb{R}} = S^2 \times \mathbb{R}^3$$ where $N$ is a real line bundle. But real line bundles on any compact CW complex $X$ are classified by the first Stiefel-Whitney class, which lives in $H^1(X, \mathbb{Z}/2)$ (since the infinite 1-Grassmannian is a $K(\mathbb{Z}/2, 1)$). However, $H^1(S^2, \mathbb{Z}/2)=0$, and so $N$ is trivial. It follows that if such an embedding existed, then $\mathcal{O}(-1)_{\mathbb{R}}$ would be stably trivial. This is, however, not the case: stable triviality would imply that the Stiefel-Whitney classes are trivial, as the product formula for them shows.
However, we know that the top Chern class in $H^2(\mathbb{CP}^1, \mathbb{Z})$ generates the group, and Proposition 3.8 in Hatcher's Vector Bundles and K-theory (available here) implies that the top Stiefel-Whitney class is the image of the top Chern class. But the image of a generator of $H^2(\mathbb{CP}^1, \mathbb{Z})$ in $H^2(S^2, \mathbb{Z}/2)$ is nonzero.

I think you can't do much better than to visualize the Hopf fibration $S^{3} \to S^{2}$, which can be defined by restricting the tautological line bundle to $S^{3} \subset \mathbb{C}^{2}$ (in other words, the fiber over each point $x \in \mathbb{CP}^{1}$ is the circle of points of length $1$ in $x$). Most of the pictures are obtained by stereographically projecting $S^{3}\smallsetminus \{N\}$ to $\mathbb{R}^{3}$. Just google for Hopf fibration and you'll get lots of nice pictures.

I do not think this is possible, but you can try to do it over $\mathbb{S}^{2}\times \mathbb{R}^{4}$. The most basic reason for this to be impossible is that the 'complex twist' involved in the tautological bundle makes it rigid enough that it becomes impossible to put into $\mathbb{S}^{2}\times \mathbb{R}^{3}$ (which is too tight). I doubt one needs to work with characteristic classes back and forth; this feels like killing a mosquito with a big hammer.
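The Stiefel–Whitney argument in the first answer can be condensed into a short computation (a sketch, using the notation established above):

```latex
% If N \oplus \mathcal{O}(-1)_{\mathbb{R}} = S^2 \times \mathbb{R}^3 with N trivial,
% the Whitney product formula gives
w\bigl(\mathcal{O}(-1)_{\mathbb{R}}\bigr)
  = w(N)\, w\bigl(\mathcal{O}(-1)_{\mathbb{R}}\bigr)
  = w\bigl(S^2 \times \mathbb{R}^3\bigr) = 1 ,
% so in particular w_2(\mathcal{O}(-1)_{\mathbb{R}}) = 0. But
w_2\bigl(\mathcal{O}(-1)_{\mathbb{R}}\bigr)
  \equiv c_1\bigl(\mathcal{O}(-1)\bigr) \pmod{2} \neq 0
% in H^2(S^2, \mathbb{Z}/2), since c_1(\mathcal{O}(-1)) generates
% H^2(\mathbb{CP}^1, \mathbb{Z}) \cong \mathbb{Z}. Contradiction.
```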
NeurIPS 2019 Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center Paper ID: 6576 Bootstrapping Upper Confidence Bound

### Reviewer 1

1. It is claimed that existing concentration-based confidence bounds are typically data-independent. This is true for the UCB1 algorithm, but there are other, more sophisticated algorithms that exploit the full distribution in their confidence bound. For example, for general distributions with support in [0,1], the empirical KL-UCB of [Cappé et al., Kullback-Leibler Upper Confidence Bounds for Sequential Resource Allocation, 2013] uses empirical likelihood to build confidence intervals, which are not at all of the form \bar y_n + data-independent terms. For Bernoulli bandits, simpler confidence intervals are proposed in the same paper (which extend to sub-Bernoulli, i.e. bounded, distributions). So I think it would be fair for those more sophisticated algorithms to be included in the comparison as well. Also, the analysis of UCB1 as proposed by [10] has been improved by several authors (notably using self-normalized deviation inequalities instead of union bounds) to show that for sigma^2 sub-Gaussian distributions the index \bar y_{n_k,t} + \sqrt{2\sigma^2 log(t)/n_{k,t}} can be used (in place of the \sqrt{2log(t)/(n_{k,t})} originally proposed for 1/4-sub-Gaussian distributions). A fair comparison should include all the improvements from the literature. The experimental setup is also not clearly related to the theoretical guarantees that are obtained: while Theorem 2 holds for any fixed problem instance ("a" stochastic K-armed bandit, a frequentist statement), it seems that the regret curves are obtained by averaging several runs on different randomly generated instances (a Bayesian evaluation). Maybe I misunderstood something, but if the arms are fixed for good instead of being randomly generated in each run, one could as well provide their values.
A Bayesian evaluation is interesting too, to assess the robustness of the algorithm on different problems, but given the nature of the theoretical results obtained, I think one or two "frequentist" regret curves are mandatory. In the linear bandit part, it seems the dimension d under which the experiments were run is not specified in Section 4.2. 2. The complexity of the algorithm is not discussed in detail; it is just written in the introduction that it is "easy to implement". It should be acknowledged that it is significantly more complex than UCB1, for example. Indeed, at each time step B bootstrap repetitions are needed to estimate the bootstrapped quantiles, and each of them requires drawing n_k random variables for each arm k (the values of the w's). Also, this requires storing the past rewards obtained on all arms, which requires a lot of memory. This constraint is also needed for the empirical KL-UCB mentioned above, which is one more reason to compare the two algorithms that have similar complexity. From Theorem 2, I guess that the w's are Rademacher random variables, but it would be good to specify this in the statement of the algorithm. Bootstrapped UCB has two hyper-parameters, B and delta. Some insight on the parameter delta would be much appreciated. The tuning of the two parameters is never justified. We get that the larger B is, the better the algorithm and the more complex, but why B=200 specifically? Regarding delta, it is arbitrarily set to delta=0.1 in Section 4.1 and then to delta_t = 1/(1+t) for linear bandits "to be fair". I don't get why this is fair. Regarding the parameter alpha, I would like to mention that it is set to alpha=1/(t+1) in each round in the statement of Algorithm 1 (and I guess the algorithm was implemented with this choice); however, regret guarantees are only obtained for a fixed choice alpha = 1/T^2, where T is the full horizon. This discrepancy is annoying. 3.
I checked the proofs of Theorem 2.2 and Theorem 3.2, which are the most important results of the paper. Note that the paper would be interesting even without the ability to generalize to sub-Weibull distributions (note that actually, all experiments feature sub-Gaussian distributions, so there is not a strong case for this generalization). As such, it should be specified which function \phi is employed in the experiments. If beta=2 I would prefer to employ directly (2log(1/alpha)/n)^{1/2}, but I couldn't figure out what was done. I'm essentially OK with the proof of Theorem 3.2, though I didn't check the sub-Weibull tricks too carefully. I noted two typos in Equation (B.18): u_1 should be \mu_1, twice. Also, the notation \bar y_s is not very precise, as it sometimes refers to s i.i.d. samples from arm 1 or from arm k: I would introduce \bar y_{k,s} to avoid this aliasing. In the proof of Theorem 2.2, I have a hard time understanding where Equation (B.2) comes from, so I think detailed explanations are needed here. By definition I get that $\Pr(\bar y_n - \mu > q_\alpha(y_n - \mu)) = \Pr_y(\Pr_w(1/n \sum_{i=1}^n w_i(y_i - \mu) > \bar y_n - \mu ) \leq \alpha)$, but the formula in (B.2) seems to have inverted the integration over y and w in a way I don't understand. Also, the notation q_\alpha(z) for an arbitrary vector z is not really defined; only q_\alpha(y_n - \mu) is defined in the paper: a more general notation should be introduced. The second problem I saw was at the top of page 13, where some conditioning on the event E is brutally removed: in the first inequality there should be a \bP(\bar y_n - \mu > q_\alpha(y_n - \mu) | E) + P() instead of the same thing without the conditioning. And the distribution of \bar y_n conditioned on the fact that y_n satisfies some condition is not necessarily the same as without the condition.

### Reviewer 2

The paper is clear and well-written. I believe the main result, Theorem 2.1, is novel. But I have the following concerns.
(1) The results depend on the symmetry of the rewards. This is a huge assumption, which does not hold for many applications, including Bernoulli bandits and many real-world problems with highly skewed rewards. I do not take this as a downside of this paper, but it should be explicitly clarified in the abstract and the introduction to avoid overclaims. (2) The function \phi(y_n) is still needed in Theorem 2.2 as an exact concentration bound for \bar{y}_n - \mu. This is only possible in previously studied cases such as bounded rewards or Gaussian rewards. Admittedly this can also be extended to sub-Gaussian or sub-Weibull rewards with known Orlicz norm, but the Orlicz norm is arguably never known in practice for potentially unbounded rewards. So from my point of view, the bootstrap UCB improves but does not extend the regime for the regret guarantees. Again, I do not think this is a downside, and I think the improvement is interesting, but this point should be made explicit at the beginning of the paper to avoid overclaims. (3) When comparing Bootstrap UCB with vanilla UCB, how do you set the alpha for both? Given the parameter \alpha, the confidence level for vanilla UCB is \alpha, while that in your theory (Theorem 2.1) is 2\alpha. For a fair comparison, if you take \delta = 0.5, equation (2.6) should be set as q_{\alpha / 4}(y_n - \bar{y}_n) + 2\log (8 / \alpha) / n. (4) In Theorem 3.2, \alpha_t is set to be 1/T^2, but in the implementation it is set to be 1/(t+1). Would it be possible to analyze the latter as well? (5) The authors describe sub-Weibull variables as "heavy-tailed" (e.g. the second line of Section 3.2). I do not think this is what people call "heavy-tailed". That usually means variables with only finitely many finite moments. Random variables with Weibull tails are light-tailed. (6) There are lots of missing references for the multiplier bootstrap. It can be dated back to Rubin (1981), where it was initially called the Bayesian bootstrap.
Later it was studied and developed by Wu (1986), Liu (1988), Mason and Newton (1992), Rao and Zhao (1992), Mammen (1993), and Chatterjee (1999), just to name a few. A relatively thorough literature review is important for a high-quality paper.

References

D. B. Rubin. The Bayesian bootstrap. The Annals of Statistics, pages 130–134, 1981.
C.-F. J. Wu. Jackknife, bootstrap and other resampling methods in regression analysis. The Annals of Statistics, 14(4):1261–1295, 1986.
R. Y. Liu. Bootstrap procedures under some non-iid models. The Annals of Statistics, 16(4):1696–1708, 1988.
D. M. Mason and M. A. Newton. A rank statistics approach to the consistency of a general bootstrap. The Annals of Statistics, 20(3):1611–1624, 1992.
C. R. Rao and L. Zhao. Approximation to the distribution of M-estimates in linear models by randomly weighted bootstrap. Sankhya: The Indian Journal of Statistics, Series A, pages 323–331, 1992.
E. Mammen. Bootstrap and wild bootstrap for high dimensional linear models. The Annals of Statistics, 21(1):255–285, 1993.
S. B. Chatterjee. Generalised bootstrap techniques. PhD thesis, Indian Statistical Institute, Kolkata, 1999.

### Reviewer 3

In this paper, the authors propose a novel point of view on a very well-known algorithm: UCB. Rather than using worst-case concentration inequalities, which only exploit tail information, the authors take advantage of the multiplier bootstrap to provide a non-parametric, data-dependent UCB. The multiplier bootstrap consists in approximating the quantile q_\alpha by reweighting the data with random multipliers independent of the data. Theorem 2.2 provides a significant result by controlling the bootstrapped quantile non-asymptotically. Indeed, rather than using a worst-case concentration inequality, which leads to a data-independent UCB, the control of the quantile makes it possible to build a data-dependent UCB. Bootstrapped UCB (Algorithm 1) uses a Monte Carlo approach to approximate the bootstrapped quantile.
The second significant analytical result is a concentration inequality for sub-Weibull distributions, which are more general than sub-Gaussian distributions. Theorem 3.1 allows extending Bootstrapped UCB (and many bandit algorithms) to sub-Weibull distributions. Finally, Theorem 3.3 states problem-dependent and problem-independent upper bounds on the regret of Bootstrapped UCB. Experimental results show that Bootstrapped UCB outperforms UCB1, while being more robust to a wrong prior than TS. This is a well-written paper which contains significant results for the bandit community. Nevertheless, I was disappointed that the approximation of the bootstrapped quantile q^B_\alpha with respect to q_\alpha is not controlled. Is it a big deal? I understand that this control depends only on the parameter B, and hence on the computational cost. However, sometimes we cannot treat the computational cost as a non-issue, for instance for IoT. ______________________________________________________________ The authors answered my concern. The obtained algorithm is still computationally expensive. However, I think that the approach is original and could open a research avenue for the bandit community. I recommend acceptance.
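To make the objects the reviews contrast concrete, here is a minimal sketch (an illustration, not the paper's implementation) of a data-independent sub-Gaussian UCB index and a data-dependent index built from the multiplier bootstrap with Rademacher weights, which is what Reviewer 1 guesses the w's to be. The function names and the default B=200 are assumptions echoing the reviews.

```python
import math
import random

def ucb_index(mean, sigma2, t, n):
    """Data-independent sub-Gaussian index: \bar y + sqrt(2 sigma^2 log(t) / n)."""
    return mean + math.sqrt(2.0 * sigma2 * math.log(t) / n)

def bootstrap_quantile(y, alpha, B=200, rng=None):
    """Multiplier-bootstrap estimate of the (1 - alpha)-quantile of
    (1/n) sum_i w_i (y_i - ybar), with Rademacher multipliers w_i
    (an assumption, following Reviewer 1's reading of Theorem 2)."""
    rng = rng or random.Random(0)
    n = len(y)
    ybar = sum(y) / n
    draws = sorted(
        sum(rng.choice((-1.0, 1.0)) * (yi - ybar) for yi in y) / n
        for _ in range(B)
    )
    # index of the empirical (1 - alpha)-quantile among B draws
    k = min(B - 1, max(0, math.ceil((1.0 - alpha) * B) - 1))
    return draws[k]

def bootstrapped_ucb_index(y, alpha, B=200):
    """Data-dependent index: sample mean plus bootstrapped quantile."""
    return sum(y) / len(y) + bootstrap_quantile(y, alpha, B)
```

Note the per-step cost Reviewer 1 points out: each call draws B x n multipliers and needs the full reward history y, whereas `ucb_index` needs only a running mean and a count.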
Abstract

Nozzle facilities, which can generate high Mach number flows, are the core portions of the supersonic wind tunnel. Different from traditional fixed nozzles, a flexible nozzle can deform to designed contours and supply steady core flows at several Mach numbers. Due to the high-quality demands of thermo-aerodynamic testing, the deformation of the flexible nozzle plate should be carefully designed. This problem is usually converted into the large deformation problem of a cantilever with movable hinge boundary conditions. In this paper, a generalized variational method is established to analyze the deformation behavior of the flexible nozzle. By introducing an axial deformation constraint and a Lagrange multiplier, an analytical model is derived to predict the deformed morphology of the flexible plate. Finite element analyses (FEA) of a single-jack flexible nozzle model are performed to examine the predicted deformations and reaction forces. Furthermore, large deformation experiments on an elastic cantilever with a movable hinge connection are carried out to simulate the scenarios in a supersonic flexible nozzle facility. Both the FEA and experimental results show the high accuracy of the current theoretical model in deformation predictions. This method can also serve as a general approach in the design of flexible mechanisms with movable boundaries.

1 Introduction

Wind tunnels [1] are facilities that supply aeronautical environments for thermo-aerodynamic testing. As a kernel portion of a wind tunnel, the nozzle component [2] transforms the flow from the reservoir condition to the testing condition with a designed Mach number. In traditional fixed nozzle facilities, the contours of the nozzle plates are fixed, which means each nozzle can only generate a stream with a fixed Mach number. Therefore, the nozzles must be changed frequently according to the demands for various Mach number flows from aircraft testing.
The fixed-flexible and fully flexible nozzles [3,4] were developed to break these limits, making the flow Mach number adjustable in one single nozzle. There are two common methods to achieve this purpose, i.e., the flexible-wall [5,6] and asymmetric sliding-block nozzle [7,8] designs. The former adjusts the flow Mach number via the opposite nozzle plates symmetrically deforming to the designed profile curve, while the latter manipulates the plate at one side only. The simplest flexible nozzle is the single-jack nozzle [4,6,9], as shown in Fig. 1(a). By geometric symmetry, this system can be simplified as an elastic cantilever hinged to a retractable rigid rod (jack), whose opposite end is hinged to a fixed point. During operation, the elongation or shortening of the jack causes the plate to deform and the jack to rotate synchronously. At the design stage, it is important to establish the relationship between the rod elongation and the deformed plate contour. This mechanical model can be treated as a large deformation beam with a movable hinge boundary.

At present, the common approach in designing flexible nozzles is finite element analysis (FEA) [10–12], which costs a large amount of time for geometry and motion modeling. On the theoretical side, most existing models are derived from empirical equations and engineering simplifications, which are not suitable for large deformation scenarios [5,13,14]. For large deformation beam problems, the optional approaches include the small-parameter perturbation method [15,16], the elliptic integral method [17–19], and the shooting method [20–23]. Actually, all these methods involve solving the deformation process during jack motions. In this paper, a generalized variational principle is established, which can calculate the beam deflection and boundary variation without complex iterations. In Sec. 2 the basic equations and their extensions for a cantilever beam hinged to a skew rod are derived.
Then, the theoretical model is verified by FEA and experiments in Secs. 3 and 4, respectively. Finally, the work is concluded in the last section.

2 Analytical Model

By geometric symmetry, the single-jack flexible nozzle facility is simplified as an elastic cantilever hinged to a rigid jack at the right end, as shown in Figs. 1(b)–1(d). The jack mechanism has translational and rotational degrees-of-freedom. In the simplest case, the jack is originally perpendicular to the beam axis in the Cartesian coordinates (x, y), as shown in Fig. 1(b). Here, we introduce the assumption that there is no axial deformation of the beam structure, i.e., its length remains unchanged during deformation. Hence, we can establish the potential energy functional with the length restriction as

$$\Pi=\int_0^{\bar{x}}\frac{1}{2}EI\left[\frac{w''}{[1+(w')^2]^{3/2}}\right]^2 dx+\lambda\left(\int_0^{\bar{x}}\sqrt{1+(w')^2}\,dx-l\right)\tag{1}$$

where the first term is the bending deformation energy with considerations of large deformation, and the second one represents the constraint condition for beam inextensibility. Since the hinge point is treated as a displacement boundary, according to the variational principle, the work done by the jack mechanism should not be included in the energy functional, which is also the key point in the current method. EI represents the bending stiffness, w denotes the deflection function of the beam, $\bar{x}$ is the horizontal coordinate of the hinge point after deformation, the superscript "ʹ" is a derivative symbol standing for "d/dx", and λ denotes the Lagrange multiplier. Based on the principle of minimum potential energy, the first-order variation of Π equals zero, which leads to

$$\frac{3w'(w'')^2}{[1+(w')^2]^4}-\frac{w'''}{[1+(w')^2]^3}+\frac{\lambda}{EI}\frac{w'}{\sqrt{1+(w')^2}}=0\tag{2}$$

$$\int_0^{\bar{x}}\sqrt{1+(w')^2}\,dx-l=0\tag{3}$$

From Eq. (2), the third-order derivative of w with respect to x can be expressed by its first and second orders as

$$w'''=\frac{3w'(w'')^2}{1+(w')^2}+\frac{\lambda}{EI}\,w'\left[1+(w')^2\right]^{5/2}\tag{4}$$

Apparently, the fourth- and higher-order derivatives of w can also be expressed in a similar way.
Hereby, the general solution to Eq. (4) can be expressed in the form of the Taylor series as

$$w=w(x_0)+w'(x_0)(x-x_0)+\frac{w''(x_0)}{2!}(x-x_0)^2+\frac{w'''(x_0)}{3!}(x-x_0)^3+\frac{w^{(4)}(x_0)}{4!}(x-x_0)^4+o(x^5)\tag{5}$$

By setting the expansion point x_0 at the origin, the fourth-order derivative of w can be obtained from Eq. (4) as

$$w^{(4)}=\frac{3(w'')^3\left[1+5(w')^2\right]}{[1+(w')^2]^2}+\frac{\lambda w''\left[1+(w')^2\right]^{1/2}}{EI}\left(12(w')^4+13(w')^2+1\right)\tag{6}$$

Considering the fixed boundary condition at the left end, the deflection function can be written as

$$w=\frac{w''(0)}{2}x^2+\frac{1}{24}\left[3w''(0)^3+\frac{\lambda}{EI}w''(0)\right]x^4\tag{7}$$

Setting the coordinates of the right end as ($\bar{x}$, $\bar{y}$) after deformation, we have

$$w(\bar{x})=\bar{y}\tag{8}$$

The hinged boundary condition at the right end leads to

$$w''(\bar{x})=0\tag{9}$$

Since the beam is hinged to the rigid jack, the coordinates of its right end can also be expressed as

$$\begin{cases}\bar{x}=l-(S_0+\Delta S)\sin\theta\\ \bar{y}=(S_0+\Delta S)\cos\theta-S_0\end{cases}\tag{10}$$

where S_0 and ΔS denote the original length and the elongation of the jack, respectively, and θ is the rotation angle of the jack. Solving Eqs. (3) and (8)–(10) simultaneously yields the displacement function, the coordinates of the right endpoint, and the rotation angle of the jack. For the case of a non-vertical jack, only Eq. (10) needs to be rewritten. For the model in Figs. 1(c) and 1(d), Eq. (10) changes to

$$\begin{cases}\bar{x}=l-\left[(S_0+\Delta S)\sin(\theta_0+\theta)-S_0\sin\theta_0\right]\\ \bar{y}=(S_0+\Delta S)\cos(\theta_0+\theta)-S_0\cos\theta_0\end{cases}\tag{11}$$

where θ_0 is the initial inclination angle of the jack.

3 Finite Element Analyses

3.1 The Cantilever Model With One Perpendicular Jack. In this section, the commercial software Abaqus is employed to simulate the models in Fig. 1. A four-node shell element with reduced integration is used to model the cantilever beam. The translator, a type of connector element in Abaqus, is selected to model the jack, which can realize the translational and rotational movements of the rigid body.
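With illustrative values of w''(0) and λ/EI (assumptions for the sketch, not values solved from the boundary conditions), the quartic deflection of Eq. (7) and the inextensibility constraint of Eq. (3) can be checked numerically; in a real design these unknowns would be found by a root search so that the arc length equals the undeformed length l while Eqs. (8)–(10) hold. A minimal sketch:

```python
import math

def deflection(x, k2, lam_over_ei):
    """Quartic deflection of Eq. (7) with k2 = w''(0):
    w = (k2/2) x^2 + (3 k2^3 + (lambda/EI) k2) x^4 / 24."""
    return 0.5 * k2 * x**2 + (3.0 * k2**3 + lam_over_ei * k2) * x**4 / 24.0

def slope(x, k2, lam_over_ei):
    """dw/dx of the quartic deflection."""
    return k2 * x + (3.0 * k2**3 + lam_over_ei * k2) * x**3 / 6.0

def arc_length(xbar, k2, lam_over_ei, steps=2000):
    """Trapezoidal quadrature of Eq. (3)'s integrand sqrt(1 + (w')^2) over [0, xbar]."""
    h = xbar / steps
    total = 0.0
    for i in range(steps + 1):
        f = math.sqrt(1.0 + slope(i * h, k2, lam_over_ei) ** 2)
        total += f if 0 < i < steps else 0.5 * f
    return total * h
```

For the inextensible beam, the unknowns (x̄, w''(0), λ) must be chosen so that `arc_length` returns l, which together with Eqs. (8)–(10) closes the system.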
The length ratio of the jack to the beam is defined as

$$\alpha=\frac{S_0}{l}\tag{12}$$

The applied loading on the jack is defined as the length variation versus its original value,

$$\varepsilon_{\mathrm{app}}=\frac{\Delta S}{S_0}\times 100\%\tag{13}$$

Introducing an angle variable φ to denote the rotation angle at the right end of the beam,

$$\varphi=\arctan\left(w'\big|_{x=\bar{x}}\right)\tag{14}$$

The moment at the left end of the beam can be expressed as

$$M_0=-\left.\frac{EIw''}{[1+(w')^2]^{3/2}}\right|_{x=0}\tag{15}$$

According to the equilibrium equation, M_0 can also be expressed as

$$M_0=-P\left[\bar{x}\cos(\theta_0+\theta)+\bar{y}\sin(\theta_0+\theta)\right]\tag{16}$$

The expression of the reaction force P from the jack is given by Eqs. (15) and (16) as

$$P=\left.\frac{EIw''}{[1+(w')^2]^{3/2}}\right|_{x=0}\cdot\frac{1}{\bar{x}\cos(\theta_0+\theta)+\bar{y}\sin(\theta_0+\theta)}\tag{17}$$

The mechanical and geometric parameters used in the finite element model are listed in Table 1, where E and ν represent Young's modulus and Poisson's ratio, respectively. Two representative geometry configurations (α = 0.5 and 0.75) and three tensile loads (ɛapp = 30%, 40%, and 50%) are modeled, respectively. All the comparisons of deformations predicted by analytical analyses (ANA) and FEA are demonstrated in Figs. 2(a) and 2(b). Similarly, the deformations corresponding to compressive loads of −30%, −40%, and −50% are shown in Figs. 2(c) and 2(d). The results demonstrate that the current analytical model can predict the deformed configurations with high accuracy, and it can describe the boundary movements as well as the jack rotation at once.

Table 1 Mechanical and geometric parameters of the one-jack cantilever: E = 2.5 GPa, ν = 0.3, l = 150 mm, b = 20 mm, h = 2 mm.

In addition to the overall deformation, several important parameters are also compared in the case of α = 0.75. Figures 3(a)–3(c) show the reaction force (P), the rotation angle (θ) of the jack, and the tangent angle (φ) at the right end of the beam for several tensile loads. They show that the forces predicted by the analytical model are smaller than the results from FEA.
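Since the left end is clamped (w(0) = w'(0) = 0), the bracket in Eq. (15) equals 1 at x = 0, and Eqs. (15)–(17) reduce to P = EI w''(0) / [x̄ cos(θ₀+θ) + ȳ sin(θ₀+θ)]. A one-line sketch of this evaluation (the function name and sample numbers are illustrative assumptions):

```python
import math

def reaction_force(ei, w2_at_0, xbar, ybar, theta0, theta):
    """Jack reaction force from Eq. (17), using w'(0) = 0 at the clamped end,
    so that [1 + (w')^2]^{3/2} = 1 at x = 0."""
    return ei * w2_at_0 / (xbar * math.cos(theta0 + theta)
                           + ybar * math.sin(theta0 + theta))
```

Given the deflection solution, this closes the loop from the kinematics (x̄, ȳ, θ) back to the load the jack must supply.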
Generally, the stiffness matrix derived in FEA is larger than that from elastic theory, which leads to a larger reaction force. Similarly, for compressive loads, the corresponding P, θ, and φ are compared in Figs. 3(d)–3(f), respectively. The minus sign before ɛapp denotes the shortening (compression) of the jack. It is apparent that the rotation angle of the jack is nonlinear in the applied loading, and its rate of change accelerates with the growth of the jack elongation (or shortening). In contrast, the tangent angle at the right end of the beam varies linearly with the jack elongation (or shortening).

3.2 The Model With One Slant Jack. The cantilever hinged to a slant jack is also constructed in FEA to further verify the analytical model. The initial inclination angle θ_0 and the length ratio α are set to 30 deg and 0.75, respectively. Other parameters remain the same as those in Sec. 3.1. The beam deflections under applied loads of 30%, 40%, and 50% are depicted in Fig. 4(a), respectively. The lines with triangle and circle markers represent the results from ANA and FEA, respectively. The results from FEA and the theoretical model show good agreement with each other. With increasing applied load, the force predicted by the analytical model further deviates from that of FEA, as shown in Fig. 4(b). Different from the situation with the perpendicular jack, the accumulated errors in the rotation angle (Fig. 4(c)) dominate the calculation errors in the reaction force, outweighing the influence of the stiffness matrix deviation. Due to the initial slope of the jack, it first rotates clockwise and then changes to counterclockwise with increasing applied load, as shown in Fig. 4(c). Similarly, the tangent angle of the beam end increases approximately linearly with the applied load, as shown in Fig. 4(d). A similar case with an initial inclination angle of −30 deg is also modeled.
The predictions of the beam deflection, jack reaction force, jack rotation angle, and tangent angle of the beam end are compared with those from ANA in Fig. 5, respectively. It is notable that the jack rotation angle is approximately linear with the increasing applied load, which is different from the previous two models. The selection of the initial jack angle not only affects the rotation direction but also dominates the rate of change of the angle-load curve.

4 Experiments

Several simple experiments were performed on a biaxial tensile test platform to further verify the analytical model, as shown in Fig. 6(a). The beam sample was manufactured from thermoplastic polylactic acid by three-dimensional (3D) printing technology; its length (l), width (b), and thickness (h) are 150 mm, 20 mm, and 1 mm, respectively. An extra length of 50 mm was spared for the clamped portion, which was clamped by two permanent magnets at one end. A rigid rod was assembled with one of its ends hinged to the beam and the other passing through a shaft sleeve, which can rotate freely about a fixed point. By pushing or pulling the rod, it rotates with the deforming beam, which simulates the mechanism of the single-jack nozzle. All experimental results were photographed by a digital camera.

The deflections of the deformed beam were obtained by the tracing point method. Figures 6(b)–6(e) show the deflection results from the analytical model and experiments as circle symbols and solid lines, respectively. Figures 6(b) and 6(c) demonstrate the case of θ_0 set to 0 with displacement loads of 50% and −50%, respectively. The cases with initial inclination angles of 30 deg and −30 deg are shown in Figs. 6(d) and 6(e), respectively. After removal of the loads, the beam recovered to its initial state, which indicates that the beam remained in the elastic range during the experiments.
As seen from the above results, the theoretical and experimental results are basically identical, which demonstrates that the proposed method has high accuracy in predicting the deformation of the single-jack cantilever beam structure.

5 Conclusions

In the current work, we proposed a generalized variational method to solve the deformation problem of the single-jack flexible nozzle structure, which can be extended to deal with the large deformation beam problem with a movable hinge boundary. The FEA and experimental validations reveal the high accuracy and feasibility of the current method. It also provides a basis for solving the problem of the multi-jack flexible nozzle structure and can serve as a guideline for the design of wind tunnels.

Acknowledgment

This research was funded by the National Natural Science Foundation of China (NSFC) (Grant No. 12072150). This work is also supported by the Joint Fund of Advanced Aerospace Manufacturing Technology Research (Grant No. U1937601), the Research Fund of the State Key Laboratory of Mechanics and Control of Mechanical Structures (Nanjing University of Aeronautics and Astronautics, Grant No. MCMS-I-0221Y01), and the National Natural Science Foundation of China for Creative Research Groups (Grant No. 51921003).

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

Nomenclature

• l = length of the beam
• w = deflection function
• P = reaction force
• M_0 = bending moment at the left end of the beam
• S_0 = initial length of the jack
• EI = bending stiffness
• α = length ratio of the jack to the beam
• ΔS = variation of the length of the jack
• ɛ_app = applied loading, the length variation of the jack versus its original value
• θ = rotation angle of the jack
• θ_0 = initial inclination angle of the jack
• λ = Lagrange multiplier
• φ = tangent angle at the right end of the beam

References

1. Atieh, A., Al Shariff, S.
, and Ahmed, N., 2016, "Novel Wind Tunnel," Sustainable Cities Soc., 25, pp. 102–107.
2. Murugappan, S., Gutmark, E. J., Lakhamraju, R. R., and Khosla, S., 2008, "Flow-Structure Interaction Effects on a Jet Emanating From a Flexible Nozzle," Phys. Fluids, 20(11), p. 117105.
3. Erdmann, S. F., 1971, "A New Economic Flexible Nozzle for Supersonic Wind Tunnels," J. Aircr., 8(1), pp. 58–60.
4. Rom, J., and Etsion, I., 1972, "Improved Flexible Supersonic Wind-Tunnel Nozzle Operated by a Single Jack," AIAA J., 10(12), pp. 1697–1699.
5. Rosen, J., 1955, "The Design and Calibration of a Variable Mach Number Nozzle," Int. J. Aeronaut. Space Sci., 22(7), pp. 484–490.
6. Lv, Z., Xu, J., Wu, F., Chen, P., and Wang, J., 2018, "Design of a Variable Mach Number Wind Tunnel Nozzle Operated by a Single Jack," Aerosp. Sci. Technol., 77, pp. 299–305.
7. Winarto, H., and Stalker, R. J., 1984, "Design Parameters and Performance of Two-Dimensional, Asymmetric, 'Sliding Block', Variable Mach Number, Supersonic Nozzles," Aeronaut. J., 88(876), pp. 270–280.
8. Liepman, H. P., 1955, "An Analytic Method for the Design of Two-Dimensional Asymmetric Nozzles," Int. J. Aeronaut. Space Sci., 22(10), pp. 701–709.
9. Chen, P., Wu, F., Xu, J., Feng, X., and Yang, Q., 2016, "Design and Implementation of Rigid-Flexible Coupling for a Half-Flexible Single Jack Nozzle," Chin. J. Aeronaut., 29(6), pp. 1477–1483.
10. Sun, S., Zhang, H., Cheng, K., and Wu, Y., 2007, "The Full Flowpath Analysis of a Hypersonic Vehicle," Chin. J. Aeronaut., 20(5), pp. 385–393.
11. Yu, C., Chen, Z., and Nie, X., 2012, "Multi-Jack Single-Drive Semi-Flexible Nozzle Mechanism Design and Simulation," 2nd International Conference on Frontiers of Manufacturing Science and Measuring Technology (ICFMM 2012), Xi'an, China, June 12–13, Trans Tech Publications Ltd., Vol. 503–504, pp. 892–895.
12.
Jiao, X., Chang, J., Wang, Z., and Yu, D., 2017, "Numerical Study on Hypersonic Nozzle-Inlet Starting Characteristics in a Shock Tunnel," Acta Astronaut., 130, pp. 167–179.
13. Guo, S. G., Wang, Z. G., and Zhao, Y. X., 2015, "Design of a Continuously Variable Mach-Number Nozzle," J. Cent. South Univ., 22(2), pp. 522–528.
14. Yang, Y., Wen, C., Wang, S. L., Feng, Y. Q., and Witt, P., 2014, "The Swirling Flow Structure in Supersonic Separators for Natural Gas Dehydration," , 4(95), pp. 52967–52972.
15. Su, Y., Wu, J., Fan, Z., Hwang, K.-C., Song, J., Huang, Y., and Rogers, J. A., 2012, "Postbuckling Analysis and Its Application to Stretchable Electronics," J. Mech. Phys. Solids, 60(3), pp. 487–508.
16. Chien, W. Z., 2002, "Second Order Approximation Solution of Nonlinear Large Deflection Problems of Yongjiang Railway Bridge in Ningbo," Appl. Math. Mech. (Engl. Ed.), 23(5), pp. 493–506.
17. Shoup, T. E., and Mclarnan, C. W., 1971, "On the Use of the Undulating Elastica for the Analysis of Flexible Link Mechanisms," J. Eng. Ind., 93(1), pp. 263–267.
18. Liu, H. G., Bian, K., and Xiong, K., 2019, "Large Nonlinear Deflection Behavior of IPMC Actuators Analyzed With an Electromechanical Model," Acta Mech. Sin., 35(5), pp. 992–1000.
19. Xu, K., Liu, H., and Xiao, J., 2021, "Static Deflection Modeling of Combined Flexible Beams Using Elliptic Integral Solution," Int. J. Non-Linear Mech., 129, p. 103637.
20. Li, Z., Yu, C., Qi, L., Xing, S., Shi, Y., and Gao, C., 2022, "Mechanical Behaviors of the Origami-Inspired Horseshoe-Shaped Solar Arrays," Micromachines, 13(5), p. 732.
21. Wang, C. M., and Kitipornchai, S., 1992, "Shooting Optimization Technique for Large Deflection Analysis of Structural Members," Eng. Struct., 14(4), pp. 231–240.
22. Nallathambi, A. K., Rao, C. L., and Srinivasan, S. M.
, 2010 , “ Large Deflection of Constant Curvature Cantilever Beam Under Follower Load ,” Int. J. Mech. Sci. , 52 ( 3 ), pp. 440 445 . 23. Abdalla , H. M. A. , and Casagrande , D. , 2020 , “ On the Longest Reach Problem in Large Deflection Elastic Rods ,” Int. J. Non-Linear Mech. , 119 ( 1 ), p. 103310 .
## 1.4 Keeping Track of Decimal Places

Consider the numbers 3.7, 37, and 370. Using our rules for logarithms, we see that each power of ten adds "one" to the value of the logarithm:

$$\begin{aligned} \log 3.7 &= \log (3.7\times 1) = \log 3.7 + \log 1 = \log 3.7 + 0 \\ \log 37 &= \log (3.7\times 10) = \log 3.7 + \log 10 = \log 3.7 + 1 \\ \log 370 &= \log (3.7\times 100) = \log 3.7 + \log 100 = \log 3.7 + 2 \end{aligned}$$

We can see a similar pattern when looking back at our table of logarithms above. For example, notice that $$\log 6$$ = 0.7782, while $$\log 60$$ = 1.7782. By keeping track of powers of 10, one only needs the values of the logarithms for numbers between 1 and 10 to be able to determine the logarithm of any other number outside that range. As an illustration, recall the plot we made earlier using Base 2. The pattern between each integer value of $$p$$ repeats; the corresponding point in each repeat section is just "2" times larger than the point in the previous section of the plot. Hence, if we want to know the value of $$2^{7.25}$$, we only need to know the value of $$2^{0.25}$$ and then multiply this answer by $$2^7$$ (= 128). We can use any number as our base, and the same general rules will apply; for our Base 10 system of common logarithms, multiplying by factors of 10 simply means adding zeroes or moving decimal points. Hence, by keeping track of powers of ten, detailed tables of common logarithms of numbers between 1 and 10, which themselves will have values between 0 and 1, are sufficient to perform a fairly accurate general multiplication or division calculation, as we shall soon see. For example, the table below gives the 3-place logarithms for numbers between 1 and 10 in increments of 0.1. For finer increments and for further accuracy of the logarithms, tables can take many pages of text. This led to the publication of books of significant length that contained detailed tables of logarithms.
| x | Log(x) | x | Log(x) | x | Log(x) | x | Log(x) |
|-----|-------|-----|-------|-----|-------|-----|-------|
| 1   | 0     | 3.3 | 0.519 | 5.6 | 0.748 | 7.9 | 0.898 |
| 1.1 | 0.041 | 3.4 | 0.531 | 5.7 | 0.756 | 8   | 0.903 |
| 1.2 | 0.079 | 3.5 | 0.544 | 5.8 | 0.763 | 8.1 | 0.908 |
| 1.3 | 0.114 | 3.6 | 0.556 | 5.9 | 0.771 | 8.2 | 0.914 |
| 1.4 | 0.146 | 3.7 | 0.568 | 6   | 0.778 | 8.3 | 0.919 |
| 1.5 | 0.176 | 3.8 | 0.58  | 6.1 | 0.785 | 8.4 | 0.924 |
| 1.6 | 0.204 | 3.9 | 0.591 | 6.2 | 0.792 | 8.5 | 0.929 |
| 1.7 | 0.23  | 4   | 0.602 | 6.3 | 0.799 | 8.6 | 0.934 |
| 1.8 | 0.255 | 4.1 | 0.613 | 6.4 | 0.806 | 8.7 | 0.94  |
| 1.9 | 0.279 | 4.2 | 0.623 | 6.5 | 0.813 | 8.8 | 0.944 |
| 2   | 0.301 | 4.3 | 0.633 | 6.6 | 0.82  | 8.9 | 0.949 |
| 2.1 | 0.322 | 4.4 | 0.643 | 6.7 | 0.826 | 9   | 0.954 |
| 2.2 | 0.342 | 4.5 | 0.653 | 6.8 | 0.833 | 9.1 | 0.959 |
| 2.3 | 0.362 | 4.6 | 0.663 | 6.9 | 0.839 | 9.2 | 0.964 |
| 2.4 | 0.38  | 4.7 | 0.672 | 7   | 0.845 | 9.3 | 0.968 |
| 2.5 | 0.398 | 4.8 | 0.681 | 7.1 | 0.851 | 9.4 | 0.973 |
| 2.6 | 0.415 | 4.9 | 0.69  | 7.2 | 0.857 | 9.5 | 0.978 |
| 2.7 | 0.431 | 5   | 0.699 | 7.3 | 0.863 | 9.6 | 0.982 |
| 2.8 | 0.447 | 5.1 | 0.708 | 7.4 | 0.869 | 9.7 | 0.987 |
| 2.9 | 0.462 | 5.2 | 0.716 | 7.5 | 0.875 | 9.8 | 0.991 |
| 3   | 0.477 | 5.3 | 0.724 | 7.6 | 0.881 | 9.9 | 0.996 |
| 3.1 | 0.491 | 5.4 | 0.732 | 7.7 | 0.886 | 10  | 1     |
| 3.2 | 0.505 | 5.5 | 0.74  | 7.8 | 0.892 |     |       |

One can immediately see how logarithms can be used to multiply numbers. For example, notice from our table that the logarithm of 1.5 is 0.176 and the logarithm of 3.6 is 0.556. The sum of the logarithms is 0.732, which we can see from the table is the logarithm of 5.4 = $$1.5\times$$ 3.6. Clearly many other examples can be found just from this simple table. Now suppose we wanted to multiply $$150\times$$ 3.6. We don't need to have the logarithm of 150 in our table – the logarithm of 150 will just be 2 + the logarithm of 1.5, or 2.176. The result will just be $$10^2$$ times the result for $$1.5\times$$ 3.6, or 540. By keeping track of powers of ten, calculations can be performed by just using logarithms of numbers between 1 and 10.
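The table-lookup procedure can be mimicked in a few lines of Python (a sketch; `math.log10` plays the role of the printed table, and the exponentiation plays the role of the "anti-log" lookup):

```python
import math

# Multiply 1.5 x 3.6 by adding logarithms, as with the printed table.
log_product = math.log10(1.5) + math.log10(3.6)  # 0.176 + 0.556 from the table
product = 10 ** log_product                       # the "anti-log" step
print(round(product, 6))  # 5.4

# For 150 x 3.6, only the power of ten (the "characteristic") changes:
log_product2 = (2 + math.log10(1.5)) + math.log10(3.6)  # log 150 = 2 + log 1.5
print(round(10 ** log_product2, 6))  # 540.0
```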
# Why is the square root of a positive number positive? [duplicate]

We have $(+3)^2=(-3)^2=9$. But why do we define $$\sqrt 9=+3?$$ Why is $\sqrt9=-3$ false? Thank you

## marked as duplicate by user147263, Yiorgos S. Smyrlis, RE60K, Najib Idrissi, Davide Giraudo Oct 1 '14 at 18:31

We want $\sqrt{\cdot}$ to be a function on nonnegative reals. To be a function, it must have exactly one value for each input, and the most natural one to choose is the positive one.

You are right, there are two values of $a$ such that $a^2=9$. Instead of saying "Please give me the positive number $a$ such that $a^2=9$", we write $\sqrt{9}$. It's short-hand for the longer sentence. If we more often cared about getting the negative number $a$ such that $a^2=9$, we might come up with special notation for that case.

• To get the negative number, you'd just write $-\sqrt\cdot$ rather than $\sqrt\cdot$. – Akiva Weinberger Oct 1 '14 at 16:22
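The same principal-root convention is built into most programming languages: the square-root routine returns only the nonnegative root, and the caller supplies the sign when both solutions of $x^2=c$ are wanted. A minimal illustration in Python:

```python
import math

# math.sqrt always returns the nonnegative ("principal") root.
r = math.sqrt(9)
print(r)  # 3.0, never -3.0

# Both solutions of x**2 == 9 are recovered by adding the sign by hand:
roots = (math.sqrt(9), -math.sqrt(9))
assert all(x * x == 9 for x in roots)
```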
# Generating sets of permutations

October 21, 2011 By (This article was first published on From the bottom of the heap » R, and kindly contributed to R-bloggers)

In previous posts I discussed how to generate a single permutation from a fully-randomised or restricted permutation design using shuffle(). Here I want to briefly mention the shuffleSet() function and illustrate its usage.

Every time you call shuffle() it has to interpret the control list to identify the type of permutation required. Whilst the overhead of this interpretation is not too high, there is no reason that it need be incurred just to generate a set of permutations. This is where shuffleSet() comes in. It works exactly like shuffle(), taking the number of observations and a control object, but in addition it takes an extra argument nset, which is the number of permutations required for the set.

> args(shuffleSet)
function (n, nset = 1, control = permControl())
NULL

To generate 10 random permutations of ten observations you would use

> set.seed(2)
> shuffleSet(10, 10)
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    2    7    5   10    6    8    1    3    4     9
 [2,]    6    3    7    2    9    5    4    1   10     8
 [3,]    7    4   10    2    3    6    1    8    5     9
 [4,]    1    2    7    8    4    6    5   10    9     3
 [5,]   10    3    1    2    6    4    5    7    9     8
 [6,]    1   10    6    7    2    5    4    3    8     9
 [7,]    8   10    6    2    9    3    7    4    1     5
 [8,]    3   10    1    2    7    4    6    9    8     5
 [9,]    4    7    1    3    2    5   10    8    6     9
[10,]   10    4    9    8    3    1    2    5    6     7

If those 10 observations were collected as a time series and we wanted 10 restricted permutations you would use

> set.seed(2)
> shuffleSet(10, 10, control = permControl(within = Within(type = "series")))
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    3    4    5    6    7    8    9   10    1     2
 [2,]    9   10    1    2    3    4    5    6    7     8
 [3,]    7    8    9   10    1    2    3    4    5     6
 [4,]    3    4    5    6    7    8    9   10    1     2
 [5,]    1    2    3    4    5    6    7    8    9    10
 [6,]    1    2    3    4    5    6    7    8    9    10
 [7,]    3    4    5    6    7    8    9   10    1     2
 [8,]   10    1    2    3    4    5    6    7    8     9
 [9,]    6    7    8    9   10    1    2    3    4     5
[10,]    7    8    9   10    1    2    3    4    5     6

From the above set of permutations, the cyclic shifts employed in the "series" permutation type are clear.
One problem with the set we just produced is that the same permutation was returned more than once. In fact, there were only six unique permutations in the set requested. This is due to there being only 10 possible permutations of the numbers 1, 2, …, 10 if we allow cyclic shifts in a single direction

> numPerms(10, control = permControl(within = Within(type = "series")))
[1] 10

shuffle() and shuffleSet() know nothing of these limits, but there are functions in the permute package that can tell you the number of possible permutations (numPerms()) and generate the entire set of permutations for a stated design (allPerms()). I'll take a look at allPerms() in a future posting.

I return now to the Golden Jackal mandible length example I used in an earlier post, but update the example to make use of shuffleSet() instead of shuffle(). I will just show the code and output for the permutation test; refer to the previous post for details:

> data(jackal) ## load the data
> ## function to compute the difference of means
> meanDif <- function(x, grp) {
+     mean(x[grp == "Male"]) - mean(x[grp == "Female"])
+ }
> N <- nrow(jackal)
> set.seed(42)
> ## generate the set of 4999 random permutations
> pSet <- shuffleSet(N, 4999)
> ## iterate over the set
> Djackal <- apply(pSet, 1, function(i, data) with(data, meanDif(Length, Sex[i])), data = jackal)
> Djackal <- c(Djackal, with(jackal, meanDif(Length, Sex)))
> (Dbig <- sum(Djackal >= Djackal[5000]))
[1] 12
> Dbig/length(Djackal)
[1] 0.0024

The last two lines of R code compute the number of observations in the Null distribution with differences in mean mandible length as great or greater than the observed difference, and the resulting permutation p-value. These are the same as those computed in the previous post.

Generating entire sets of permutations is useful for several reasons. One recent example that we came across is with the new parallel processing capabilities in the forthcoming version of R.
We are able to generate a set of permutations and then distribute the process of the permutation test over a number of CPUs or worker threads, each dealing with a subset of the permutations we generated. This can greatly reduce the compute time needed for the permutation test, especially where the objective function is computationally complex, but allows us to not worry about controlling the random number generator in each separate process — this is all done within the main function and only the relevant subset of permutations is passed to each worker process.

An additional reason for generating a set of permutations to work with, rather than individual permutations, is that it is easy to switch between using a set of randomly generated permutations or the set of all possible permutations where that set is not overly large. allPerms() returns the set of permutations in the same way that shuffleSet() does, so we can simplify our code if we write the test to iterate over a set of permutations.

The full script for the Golden Jackal permutation test is shown below:

data(jackal) ## load the data
## function to compute the difference of means
meanDif <- function(x, grp) {
    mean(x[grp == "Male"]) - mean(x[grp == "Female"])
}
N <- nrow(jackal)
set.seed(42)
## generate the set of 4999 random permutations
pSet <- shuffleSet(N, 4999)
## iterate over the set
Djackal <- apply(pSet, 1, function(i, data) with(data, meanDif(Length, Sex[i])), data = jackal)
Djackal <- c(Djackal, with(jackal, meanDif(Length, Sex)))
(Dbig <- sum(Djackal >= Djackal[5000]))
Dbig/length(Djackal)
For every rational number, does there exist a sequence of irrationals which converges to it?

I can think of examples where a sequence of irrationals converges to $0$. But if we pick any rational, will there always exist a sequence of irrationals which converges to it? I cannot find a straight answer to this question.

- Let $r$ be our rational. Look at $r+\frac{\sqrt{2}}{n}$. This may be the example you had in mind, "shifted" by $r$. –  André Nicolas Mar 11 '14 at 18:47

Assume your number is $\frac{p}{q}$. Then the sequence $$a_n=\frac{\pi}{n}+\frac{p}{q}$$ converges to the given number and is irrational (any irrational number in the place of $\pi$ would do).

- this is assuming $\lim_{n\to\infty}$, right? –  Cole Johnson Mar 11 '14 at 23:47
- @ColeJohnson Yes, right! –  Stef Mar 11 '14 at 23:49

Yes, take a sequence consisting of your sequence of irrationals converging to $0$ plus your desired rational limit.

Yes: If $r\in\Bbb Q$, then $\forall n\in\Bbb N$: ${rn\over n+\sqrt2}\in{\Bbb Q}^c$ and $$\lim_{n\to\infty}{rn\over n+\sqrt2}=r.$$

For any rational number $x=\frac{p}{q}$ with $\gcd(p,q)=1$, just consider: $$x_n = \frac{p}{q}\cdot\frac{n}{\sqrt{n^2+1}}.$$ Clearly any $x_n$ belongs to $\mathbb{R}\setminus\mathbb{Q}$ and we have $\lim_{n\to +\infty} x_n = x$.

Yes. Consider $\frac{p}{q} - \frac{\sqrt{2}}{n}$
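The constructions above are easy to check numerically. A small sketch (it can only verify the convergence, not the irrationality of the terms, since machine floats are themselves rationals):

```python
import math

r = 3 / 7  # an arbitrary rational target

# Terms of the sequence r + sqrt(2)/n: each term is irrational,
# and the distance to r is sqrt(2)/n, which tends to 0.
for n in (10, 1000, 100000):
    a_n = r + math.sqrt(2) / n
    print(n, abs(a_n - r))  # the gap shrinks like 1/n
```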
# What is 0.218 As A Fraction?

0.218 as a fraction is a mathematical representation of a decimal number. It is one of the most common ways of expressing numbers in mathematics. Decimals are used to represent numbers that fall between whole numbers, such as 0.218. A fraction is a way of expressing a number as a ratio of two integers. To convert a decimal number to a fraction, write the decimal digits over the matching power of ten. Since 0.218 has three decimal places, it can be expressed as the fraction 218/1000.

## How to Simplify 0.218 As A Fraction

Simplifying a fraction is the process of reducing a fraction to its lowest terms. This is done by finding the greatest common factor of the numerator and the denominator and dividing both the numerator and denominator by it. In the case of 218/1000, the greatest common factor is 2. Therefore, the fraction can be simplified to 109/500. This fraction, 109/500, is the simplest form of 0.218 as a fraction.

## How to Convert 0.218 As A Fraction to a Decimal

Converting a fraction to a decimal is also a common operation in mathematics. To convert a fraction to a decimal, divide the numerator by the denominator. In the case of 109/500, the numerator is 109 and the denominator is 500. When 109 is divided by 500, the answer is 0.218, recovering the decimal we started from.

## How to Convert 0.218 As A Fraction to a Percent

To convert a fraction to a percent, multiply the fraction by 100. In the case of 109/500, multiplying by 100 gives 109/5 = 21.8. This means that 0.218 as a fraction can be expressed as 21.8% as a percent.

## How to Convert 0.218 As A Fraction to a Mixed Number

A mixed number is a number that is composed of a whole number and a fraction. To convert a fraction to a mixed number, divide the numerator by the denominator and write the quotient as the whole part and the remainder over the denominator as the fractional part.
In the case of 109/500, dividing 109 by 500 gives a quotient of 0 with a remainder of 109. This means that 0.218 as a fraction can be expressed as 0 and 109/500 as a mixed number; since 0.218 is less than 1, its whole part is simply 0.

## Summary

0.218 as a fraction is a mathematical representation of a decimal number. It can be expressed as 218/1000 or simplified to 109/500. It can also be converted back to the decimal 0.218, to the percent 21.8%, and to the mixed number 0 and 109/500. Understanding how to convert decimals to fractions and fractions to decimals is an important skill in mathematics. Knowing how to simplify fractions and convert them to other forms is also important for solving mathematical problems.
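The whole conversion chain can be checked with Python's standard fractions module (a sketch, not part of the original article):

```python
from fractions import Fraction

f = Fraction(218, 1000)        # 0.218 written over a power of ten
print(f)                        # 109/500 -- Fraction auto-reduces by the GCF 2
assert f == Fraction("0.218")   # same value parsed straight from the decimal

print(float(f))                 # 0.218 (back to a decimal)
print(round(float(f) * 100, 3)) # 21.8  (as a percent)
```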
# angle_style command

## Syntax

angle_style style

• style = none or hybrid or charmm or class2 or cosine or cosine/squared or harmonic

## Examples

angle_style harmonic
angle_style charmm
angle_style hybrid harmonic cosine

## Description

Set the formula(s) LAMMPS uses to compute angle interactions between triplets of atoms, which remain in force for the duration of the simulation. The list of angle triplets is read in by a read_data or read_restart command from a data or restart file. Hybrid models, where angles are computed using different angle potentials, can be set up using the hybrid angle style.

The coefficients associated with an angle style can be specified in a data or restart file or via the angle_coeff command. All angle potentials store their coefficient data in binary restart files, which means angle_style and angle_coeff commands do not need to be re-specified in an input script that restarts a simulation. See the read_restart command for details on how to do this. The one exception is that angle_style hybrid only stores the list of sub-styles in the restart file; angle coefficients need to be re-specified.

Note: When both an angle and a pair style are defined, the special_bonds command often needs to be used to turn off (or weight) the pairwise interaction that would otherwise exist between 3 bonded atoms.

In the formulas listed for each angle style, theta is the angle between the 3 atoms in the angle.

Here is an alphabetic list of angle styles defined in LAMMPS. Click on the style to display the formula it computes, any additional arguments specified in the angle_style command, and coefficients specified by the associated angle_coeff command. There are also additional accelerated angle styles included in the LAMMPS distribution for faster performance on CPUs, GPUs, and KNLs.
The individual style names on the Commands angle doc page are followed by one or more of (g,i,k,o,t) to indicate which accelerated styles exist.

• none - turn off angle interactions
• zero - topology but no interactions
• hybrid - define multiple styles of angle interactions
• charmm - CHARMM angle
• class2 - COMPASS (class 2) angle
• class2/p6 - COMPASS (class 2) angle expanded to 6th order
• cosine - angle with cosine term
• cosine/buck6d - same as cosine with Buckingham term between 1-3 atoms
• cosine/delta - angle with difference of cosines
• cosine/periodic - DREIDING angle
• cosine/shift - angle cosine with a shift
• cosine/shift/exp - cosine with shift and exponential term in spring constant
• cosine/squared - angle with cosine squared term
• cross - cross term coupling angle and bond lengths
• dipole - angle that controls orientation of a point dipole
• fourier - angle with multiple cosine terms
• fourier/simple - angle with a single cosine term
• harmonic - harmonic angle
• mm3 - anharmonic angle
• quartic - angle with cubic and quartic terms
• sdk - harmonic angle with repulsive SDK pair style between 1-3 atoms
• table - tabulated by angle

## Restrictions

Angle styles can only be set for atom_styles that allow angles to be defined.

Most angle styles are part of the MOLECULE package. They are only enabled if LAMMPS was built with that package. See the Build package doc page for more info. The doc pages for individual angle potentials tell if it is part of a package.

## Default

angle_style none
# Ask the name of a combinatorial theorem

It is a classical theorem. For a given integer $n \ge 1$, among the ${n\choose{n/2}} = 2^{(1-o(1))n}$ strings in the cube $\{0, 1\}^n$ with weight $n/2$ (i.e., exactly $n/2$ indices are 1), there are at least $2^{cn}$ of these strings such that each pair has Hamming distance at least $n/4$, where $c$ is a constant between $0$ and $1$. This is for sure a known result. I hope to be made aware of its name.

I don't have a name for the theorem, but I can give a quick proof in case you don't get hold of a name: Let $S$ be a maximal set of $n/4$-separated strings in the weight-$n/2$ slice. Then the union of $n/4$-Hamming balls centred at elements of $S$ covers the entire slice. But each Hamming ball has $\binom{n}{0}+\ldots+\binom{n}{n/4}\sim 2^{an}$ elements, where $a=-\frac 14\log_2(\frac14)-\frac 34\log_2(\frac 34)<1$. Hence $S$ must consist of at least $2^{(1-a)n}$ elements.
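The constant in the answer is the binary entropy $H(1/4)$, as in Gilbert–Varshamov-type volume arguments. A quick numeric sketch of the constant and of the Hamming-ball volume bound for a small $n$:

```python
import math

# a = H(1/4), the binary entropy at 1/4, governs the ball volume 2^(a n).
a = -0.25 * math.log2(0.25) - 0.75 * math.log2(0.75)
print(round(a, 4))  # 0.8113, so the maximal set has >= 2^((1 - a) n) elements

# Sanity check for n = 40: the ball volume sum_{k<=n/4} C(n, k)
# stays below 2^(a n), and its exponent approaches a as n grows.
n = 40
ball = sum(math.comb(n, k) for k in range(n // 4 + 1))
print(math.log2(ball) / n)  # below a, tending to a for large n
```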
1. Mar 1, 2012

jaja1990

In an attempt to test my knowledge of Newton's laws, I have posed the following question to myself. My attempt: (The arrows indicate direction.) I have been informed that the solution is invalid, and that the question itself is wrong. Can you tell me why the problem is wrong/invalid? Someone tried to explain it to me, but I didn't really understand.

2. Mar 1, 2012

Staff: Mentor

For one thing:
# How to compute the first eigenvalue of $M = R \times {}_{\cosh t}N$

jiangsaiyin, asked 2013-01-25 (MathOverflow)

Let $$M = R \times N$$ with the warped product metric $$d{s^2} = d{t^2} + {\cosh ^2}\left( t \right)ds_N^2$$ where $N$ ($\dim N = n-1$) is a compact manifold with $$\mathrm{Ric} \ge - \left( {n - 2} \right).$$ It should be mentioned that $M$ may not be a Riemannian manifold but an Alexandrov space. So how does one compute the first eigenvalue of $M$? If we restrict to the case $$N = {S^{n - 1}}\left( {\tfrac{1}{2}} \right),$$ an $(n-1)$-dimensional sphere with radius $1/2$, then what is the result?
# What does pandas describe() percentiles values tell about our data?

Let say this is my dataframe

x=[0.09, 0.95, 0.93, 0.93, 0.34, 0.29, 0.14, 0.23, 0.91, 0.31, 0.62, 0.29, 0.71, 0.26, 0.79, 0.3 , 0.1 , 0.73, 0.63, 0.61]
x=pd.DataFrame(x)

When we x.describe() this dataframe we get a result like this

>>> x.describe()
              0
count 20.000000
mean   0.508000
std    0.302770
min    0.090000
25%    0.282500
50%    0.475000
75%    0.745000
max    0.950000

What is meant by the 25, 50, and 75 percentile values? Is it saying 25% of values in x are less than 0.28250?

• I have updated my answer. I'll be glad if you take a look since I assume my previous illustration was misleading. – Fatemeh Asgarinejad May 27 '19 at 7:32

It describes the distribution of your data: 50 should be a value that describes "the middle" of the data, also known as the median. 25 and 75 are the borders of the lower/upper quarter of the data. You can get an idea of how skewed your data is. Note that the mean is higher than the median, which means your data is right skewed. Try:

import pandas as pd
x=[1,2,3,4,5]
x=pd.DataFrame(x)
x.describe()

First, note that the percentile values in the describe table need not be elements of your array x. You need to sort your array (x), then calculate the location of your percentile (which in the describe method defaults to p = 0.25, 0.5 and 0.75):

sorted_x = [0.09, 0.1 , 0.14, 0.23, 0.26, 0.29, 0.29, 0.3 , 0.31, 0.34, 0.61, 0.62, 0.63, 0.71, 0.73, 0.79, 0.91, 0.93, 0.93, 0.95]

The 25th percentile falls at the point that splits the sorted list into 25 and 75 percent, shown by | here:

sorted_x = [0.09, 0.1 , 0.14, 0.23, 0.26,**|** 0.29, 0.29, 0.3 , 0.31, 0.34, 0.61, 0.62, 0.63, 0.71, 0.73, 0.79, 0.91, 0.93, 0.93, 0.95]

Since that point lies three quarters of the way between the neighbouring values 0.26 and 0.29, the value is calculated by linear interpolation as $$0.26 + (0.29-0.26)\times\frac{3}{4}$$ which equals $$0.28250000000000003$$
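The interpolation rule can be reproduced in a few lines of plain Python (a sketch of the default linear method used by describe(), without pandas):

```python
def percentile(data, p):
    """Linear-interpolation percentile (the default used by pandas/numpy)."""
    s = sorted(data)
    pos = (len(s) - 1) * p  # fractional index into the sorted data
    lo = int(pos)            # lower neighbour
    frac = pos - lo          # how far we sit between the two neighbours
    if lo + 1 < len(s):
        return s[lo] + (s[lo + 1] - s[lo]) * frac
    return s[lo]

x = [0.09, 0.95, 0.93, 0.93, 0.34, 0.29, 0.14, 0.23, 0.91, 0.31,
     0.62, 0.29, 0.71, 0.26, 0.79, 0.3, 0.1, 0.73, 0.63, 0.61]

for p in (0.25, 0.50, 0.75):
    print(p, percentile(x, p))  # 0.2825, 0.475, 0.745 -- matching describe()
```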
# Spring Constant Formula

According to Hooke's law, the force required to compress or extend a spring is directly proportional to the distance it is stretched.

## Spring Constant Formula

The formula of spring constant is given as:

Formula: F = -kx
SI unit: N.m-1

Where,
• F is the restoring force of the spring directed towards the equilibrium
• k is the spring constant in N.m-1
• x is the displacement of the spring from its equilibrium position

In other words, the spring constant is the force applied when the displacement of the spring is unity. The minus sign only records that the restoring force points opposite to the displacement; the constant k itself is positive. Consider a force F that stretches the spring so that it is displaced from the equilibrium position by x.

## Spring Constant Dimensional Formula

We know that F = -kx. Therefore, $k=\frac{F}{x}$

Dimension of F = [MLT-2]
Dimension of x = [L]

Therefore, the dimension of k is $\frac{[MLT^{-2}]}{[L]}=[MT^{-2}]$

The Spring Constant Formula is given as $k=\frac{F}{x}$, where
• F = Force applied,
• x = displacement by the spring

It is expressed in Newton per meter (N/m).

### Solved Examples

Example 1: A spring with load 5 kg is stretched by 40 cm. Determine its spring constant.

Solution: Given: Mass m = 5 kg, Displacement x = 40 cm = 0.4 m. The stretching force is the weight of the load: F = mg = 5 × 9.8 = 49 N. The spring constant is given as: $k=\frac{F}{x}$ = 49 / 0.4 = 122.5 N/m

Example 2: A boy weighing 20 pounds stretches a spring by 50 cm. Determine the spring constant of the spring.

Solution: Given: Mass m = 20 lbs = 20 / 2.2 = 9.09 kg, Displacement x = 50 cm = 0.5 m. The force F = mg = 9.09 × 9.8 = 89.082 N. The spring constant formula is given by: $k=\frac{F}{x}$ = 89.082 / 0.5 = 178.164 N/m
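Each worked example reduces to one line of arithmetic; a quick sketch, taking the stretching force as the weight mg (g = 9.8 m/s² and the 2.2 lb/kg conversion as in the examples):

```python
G = 9.8  # gravitational acceleration, m/s^2

def spring_constant(mass_kg, stretch_m, g=G):
    """k = F/x, with the stretching force taken as the weight m*g."""
    return mass_kg * g / stretch_m

print(round(spring_constant(5, 0.40), 3))       # 122.5 N/m (Example 1)
print(round(spring_constant(20 / 2.2, 0.50), 3))  # 178.182 N/m (Example 2;
# slightly different from 178.164 because the text rounds the mass to 9.09 kg)
```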
# What is the 'sense' of a vector?

In my country we are taught of vectors as if they have three components: module (the length), direction (slope of the line that contains the vector), and 'sense' (sentido), which indicates the "way" or "sense" that the vector "goes". The vector $\vec{u}(5,6)$ has the same sense as $\vec{v}(10,12)$, and the opposite of $\vec{w}(-5,-6)$.

My question is: What is really the thing that we're talking about? Is it a number? What are the values that 'sense' can be? Is it either 'the same', 'opposite', or 'different'? What is the sense of $\vec{z}(5,4)$ compared to the first vector? Does it only exist while we're comparing vectors?

• I would usually consider "sense" as part of the direction of a vector: e.g., $(5,6)$ and $(-5,-6)$ point in different directions. Sense the way you're using it only really makes sense when comparing parallel vectors. (What is the sense of $(-5,6)$ compared to $(6,5)$, say?) – amd Apr 5 '18 at 23:21

• @amd That's what I'm asking about, from the exercises and the (short) material that I've seen, they would be just "different". – Nick Cassol Apr 5 '18 at 23:33

• The answer could just as well be "undefined." Alternatively, you could look at the projection of the second vector onto the first and use that to determine if they have the "same" sense, i.e., if they loosely point in the same direction relative to a boundary that's perpendicular to the first vector. – amd Apr 5 '18 at 23:37

Sense is a set of all half lines that have the same orientation and by pairs belong to the same part of the half plane they define.

What is really the thing that we're talking about? It is a set of half lines.
A vector has sense A if the half line obtained by extending the line segment from its end belongs to A.

Is it a number? No.

What are the values that 'sense' can be? This makes no sense (haha) if we define it as a set.

Is it either 'the same', 'opposite', or 'different'? You need to rephrase those in terms of set relations and/or new relations you define.

What is the sense of $\vec{z}(5,4)$ compared to the first vector? A disjoint set, and whatever else you want to define.

Does it only exist while we're comparing vectors? No.

PS: I found this topic really interesting since I was taught the same thing and had a similar question.

The best mathematical translation of the Spanish sentido is direction rather than sense. It makes sense to say that two collinear vectors have the same or have opposite directions, but, absent some additional external reference, it does not make sense to speak of the direction of a vector or to say that two noncollinear vectors have or have not the same direction.

Given a vector, $$(x, y)$$, one can regard the unit vector $$\tfrac{(x, y)}{\sqrt{x^{2} + y^{2}}}$$ having the same direction as $$(x, y)$$ as its direction vector (or its sense).
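The unit-vector view makes the comparison concrete: normalize both vectors, and equal unit vectors mean the same sense, while negated unit vectors mean opposite sense; anything else is not comparable. A small sketch using the vectors from the question:

```python
import math

def unit(v):
    """Unit vector carrying the direction (and sense) of v."""
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def sense(u, v, tol=1e-9):
    """Return 'same', 'opposite', or 'not comparable' (non-collinear)."""
    a, b = unit(u), unit(v)
    if all(abs(x - y) < tol for x, y in zip(a, b)):
        return "same"
    if all(abs(x + y) < tol for x, y in zip(a, b)):
        return "opposite"
    return "not comparable"

print(sense((5, 6), (10, 12)))  # same
print(sense((5, 6), (-5, -6)))  # opposite
print(sense((5, 6), (5, 4)))    # not comparable
```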
## appleduardo Group Title: what is the integral of e^(senx) 4cosx dx ? how can i solve it? one year ago

1. appleduardo $\int\limits_{}^{}e ^{sen x} 4\cos x dx$
2. geerky42 sen? You mean sec?
3. satellite73 try $$u=\sin(x), du=\cos(x)dx$$ and you get it in one step
4. appleduardo i got $[e^{sen x} +c] [4 sen x + c]$ is that correct?
5. appleduardo i meant "sin":
6. tkhunny $$\int e^{\sin(x)}\cdot 4\cos(x)\;dx$$ Following satellite73 suggestion u = sin(x), du = cos(x)dx. This gives $$\int e^{u}\cdot 4\;du = 4\cdot e^{u} + C$$ Substitute back to where we started. $$4\cdot e^{\sin(x)} + C$$ Be careful, consistent, and confident.
7. appleduardo thank you so much! but what happened with cos ?
8. tkhunny It's all in there with the nature of the substitution. See the definition of du.
9. appleduardo so in this case cos represents the derivative for sin in the formula , right?
10. tkhunny That is where it came from. You can't just substitute a function. The nature of dx changes when you do that. Is English your first language? The answer to this question might help other folks understand where "sen(x)" came from.
11. appleduardo haha yeah, uhmm but right now im studying in a spanish-speaking country, so sometimes (unconsciously) i say or write spanish :/ . thank you so!
12. tkhunny No worries - as long as you don't mind freaking people out when you accidentally write the spanish versions of things. Good work!
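The substitution result in the thread is easy to sanity-check numerically: the derivative of the antiderivative $4e^{\sin x}$ should reproduce the integrand $4\cos(x)\,e^{\sin x}$. A small sketch using a central finite difference:

```python
import math

def F(x):
    # antiderivative found via the substitution u = sin(x)
    return 4 * math.exp(math.sin(x))

def integrand(x):
    return 4 * math.cos(x) * math.exp(math.sin(x))

x, h = 0.7, 1e-6
numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)  # central difference
print(abs(numeric_derivative - integrand(x)))  # tiny: the two agree
```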
PortOpt [Portfolio Optimizer] is an open-source (LGPLed) C++ program (with a Python binding) implementing the Markowitz (1952) mean-variance model with the agent's linear indifference curves toward risk, in order to find the optimal assets portfolio under risk. You have to provide PortOpt (in text files or, if you use the API, through your own code) the variance/covariance matrix of the assets, their average returns and the agent's risk preference. It returns the vector of assets' shares that compose the optimal portfolio. In order to minimise the variance it internally uses QuadProg++, a library that implements the algorithm of Goldfarb and Idnani for the solution of a (convex) Quadratic Programming problem by means of an active-set dual method.

Windows executable (command line tool!), linux (x64) executable and source code (for C++/Python) are available from SourceForge.

Bugs and support

If you find a bug, request a feature or need support, open a ticket or discuss it in the SourceForge pages.

Theoretical Background

In portfolio theory agents attempt to maximise portfolio expected return for a given amount of portfolio risk, or equivalently to minimise risk for a given level of expected return. The portfolio management can be portrayed graphically as in the above Figure, where the feasible set of variance-profitability combinations is enclosed by the blue curve and the B-D segment represents the efficient frontier, where no variance can be lowered at productivity's price or equivalently no productivity can be increased at the price of increasing variance. In order to simplify computations, agents are assumed to have risk aversion with simple linear preferences, that is, they are willing to trade off variance with productivity proportionally.
In such case the indifference curves can be drawn like a bundle of straight lines having equation $prod = \alpha * var + \beta$, where $\alpha$ is the linear risk aversion coefficient and both $prod$ and $var$ refer to the overall portfolio's productivity and variance. Point $B$ represents the point having the lowest possible portfolio variance. Agents with $\alpha$ risk aversion will however choose the tangent point $C$ that can be obtained by solving the following quadratic problem: $$\begin{array}{rrrll} \max_{x_i, \beta} & Y & = & \beta & \\ s.t. & & & & \\ & x_i & \geqslant & 0 & \forall i\\ & \sum_i x_i & = & 1 & \\ & \sum_i {x_i p_i} & = & \alpha \sum_i { \sum_j { x_i x_j \sigma_{i,j}}} + \beta & \\ \end{array} \label{eq:optimisation_problem1}$$ that by substitution becomes: $$\begin{array}{rrrll} \min_{x_i} & Y & = & \alpha \sum_i { \sum_j { x_i x_j \sigma_{i,j}}} - \sum_i {x_i p_i} & \\ s.t. & & & & \\ & x_i & \geqslant & 0 & \forall i\\ & \sum_i x_i & = & 1 & \\ \end{array} \label{eq:optimisation_problem2}$$ where $x_i$ is the share of the asset $i$, $p_i$ is its productivity, $\sigma_{i,j}$ is the covariance between assets $i$ and $j$ and hence $\sum_i {x_i p_i}$ is the overall portfolio productivity and $\sum_i { \sum_j { x_i x_j \sigma_{i,j}}}$ is its variance. As the variance/covariance matrix $\sigma_{i,j}$ is positive semi-definite, the quadratic form is convex (for $\alpha \ge 0$) and the problem is hence easily solved numerically.

Compilation (not needed if using a pre-compiled version)

Option 1 - as a stand-alone program

g++ -std=c++0x -O -o portopt_executable QuadProg++.cpp Array.cpp anyoption.cpp portopt.cpp main.cpp

Option 2 - as a library to be used in your C/C++ program

g++ -fPIC -std=c++0x -O -c QuadProg++.cpp Array.cpp anyoption.cpp portopt.cpp main.cpp
g++ -std=c++0x -O -shared -Wl,-soname,portopt.so -o portopt.so QuadProg++.o Array.o anyoption.o portopt.o
g++ -std=c++0x -O -o portopt_executable main.o -Wl,-rpath,'$ORIGIN' -L .
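For intuition, the two-asset case of the substituted minimisation problem can be solved by brute force over the single free share (a toy sketch only; the returns, variances and alpha below are made-up numbers, and PortOpt itself of course uses the Goldfarb-Idnani QP solver instead):

```python
# Two assets: minimise  alpha * portfolio_variance - portfolio_mean
# subject to x1 + x2 = 1 and x >= 0, by scanning x1 over [0, 1].
alpha = 10.0       # risk aversion coefficient (made-up)
p = (0.08, 0.05)   # average returns (made-up)
var = (0.04, 0.02) # asset variances (made-up)
cov = 0.0          # covariance between the two assets (made-up)

def objective(x1):
    x2 = 1.0 - x1
    port_var = var[0] * x1**2 + var[1] * x2**2 + 2 * cov * x1 * x2
    port_mean = p[0] * x1 + p[1] * x2
    return alpha * port_var - port_mean

best = min((i / 10000 for i in range(10001)), key=objective)
print(round(best, 3))  # 0.358: share of asset 1 in the optimal portfolio
```

The exact optimum here is $x_1 = 0.43/1.2 \approx 0.3583$, obtained by setting the derivative of the objective to zero, so the grid scan lands on the right point.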
portopt.so ./portopt_executable Option 3 - as a lib to be used in python, using swig swig -c++ -python portopt.i g++ -fPIC -std=c++0x -O -c QuadProg++.cpp Array.cpp anyoption.cpp portopt.cpp portopt_wrap.cxx main.cpp -I/usr/include/python2.7 -I/usr/lib/python2.7 g++ -std=c++0x -O -shared -Wl,-soname,_portopt.so -o _portopt.so QuadProg++.o Array.o anyoption.o portopt.o portopt_wrap.o (then please refer to the python example for usage) If you want to change the output library name (e.g. you want to create _portopt_p3.so for python3 alongside _portopt.so for python2), do it in the %module variable of portopt.i and in the -soname and -o options of the linking command (and don't forget to use the right python included directory in the compilation command). You can then load the correct module in your script with something like: import sys if sys.version_info < (3, 0): import portopt else: import portopt_p3 as portopt Usage Please notice that the API changed from version 1.1, with the introduction of the port_opt_mean and port_opt_var parameters (both by reference). For the old 1.1 call instructions see here. 
Linux:

    ./portopt [options]

Windows:

    portopt.exe [options]

(from a DOS prompt: (a) START → run → "cmd"; (b) cd \path\to\portopt)

As a lib from C++, call:

    double solveport (const vector< vector <double> > &VAR, const vector<double> &MEANS, const double &alpha, vector<double> &x_h, int &errorcode, string &errormessage, double &port_opt_mean, double &port_opt_var, const double tollerance = 0.000001)

As a lib using Python:

    import portopt
    results = portopt.solveport(var, means, alpha, tolerance) # tolerance is optional, defaults to 0.000001
    functioncost = results[0]
    shares = results[1]
    errorcode = results[2]
    errormessage = results[3]
    opt_mean = results[4]
    opt_var = results[5]

Options

-h --help Prints this help
-v --var-file [input_var_file_name] Input file containing the variance/covariance matrix (relative path)
-m --means-file [input_means_file_name] Input file containing the means vector (relative path)
-a --alpha [alpha_coefficient] Coefficient between production and risk in the linear indifference curves
-f --field-delimiter [field_delimiter] Character to use as field delimiter (default: ';')
-s --decimal-separator [decimal-separator] Character to use as decimal separator (default: '.')
-t --tollerance [tolerance] A tolerance level to distinguish from zero (default: 0.000001)

Notes

• The higher the alpha, the lower the agent's risk aversion;
• Set a negative alpha to retrieve the portfolio with the lowest possible variance;
• Set alpha to zero to retrieve the portfolio with the highest mean, independently of variance (solution not guaranteed to be unique);
• Asset shares are returned in the x_h vector; an eventual error code (0: all fine, 1: input data error, 2: no solutions, 3: didn't solve, 4: solver internal error) in the errorcode parameter.
• Use the option "tollerance" (with two l's) up to version 1.1 included.

License

PortOpt is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. PortOpt is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with PortOpt. If not, see http://www.gnu.org/licenses.

Citations

If you use this program or a derivative of it in an academic framework, please cite it!
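As a sanity check of the minimisation problem in the Theoretical Background, the two-asset case can be solved in closed form: with shares x and 1 - x, setting the derivative of Y = alpha * var - mean to zero gives the optimal share directly. This is a minimal illustrative sketch, not part of the PortOpt API:

```python
def two_asset_share(p1, p2, s11, s22, s12, alpha):
    """Optimal share x of asset 1 minimising alpha*var - mean for two assets.

    p1, p2: mean returns; s11, s22: variances; s12: covariance; alpha > 0.
    Derived by setting dY/dx = 0 for
    Y(x) = alpha*(x^2*s11 + (1-x)^2*s22 + 2x(1-x)*s12) - (x*p1 + (1-x)*p2),
    then clamping to the feasible range [0, 1] (shares are non-negative).
    """
    x = ((p1 - p2) / (2.0 * alpha) + s22 - s12) / (s11 + s22 - 2.0 * s12)
    return max(0.0, min(1.0, x))
```

Note that as alpha grows the term (p1 - p2)/(2*alpha) vanishes, so the solution tends to the minimum-variance split (s22 - s12)/(s11 + s22 - 2*s12), regardless of the mean returns.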
Requests for technical support from the VASP group should be posted in the VASP-forum.

# Harris-Foulkes functional

$$E_{\mathrm{HF}}[\rho_{\mathrm{in}},\rho] = E_{\mathrm{band}}\left(V_{\mathrm{in}}^{H}+V_{\mathrm{in}}^{xc}\right) + \mathrm{Tr}\left[\left(-V_{\mathrm{in}}^{H}/2-V_{\mathrm{in}}^{xc}\right)\rho_{\mathrm{in}}\right] + E^{xc}[\rho_{\mathrm{in}}+\rho_{c}],$$

where $E_{\mathrm{band}}$ denotes the band-structure energy obtained for the input potential $V_{\mathrm{in}}^{H}+V_{\mathrm{in}}^{xc}$. It is interesting that the functional gives a good description of the binding energies, equilibrium lattice constants, and bulk moduli even for covalently bonded systems like Ge. In a test calculation we have found that the pair-correlation function of l-Sb calculated with the HF functional and with the full Kohn-Sham functional differs only slightly. Nevertheless, we must point out that the computational gain in comparison to a self-consistent calculation is in many cases very small (for Sb, less than 20%). The main reason for using the HF functional is therefore to assess and establish the accuracy of the HF functional, a topic which is currently widely discussed within the community of solid-state physicists. To our knowledge VASP is one of the few pseudopotential codes which can assess the validity of the HF functional at a very basic level, i.e. without any additional restrictions like local basis sets etc.
## AravindG 4 years ago doubts on newton laws of motion 1. AravindG k thx for cming 2. AravindG how many types of inertia are there? 3. AravindG easmore? 4. anonymous Typically, we consider two. That associated with linear motion and that associated with rotational motion. 5. AravindG oh i was asking like inertia of rest ,inertia of motion,inertia of direction etc 6. anonymous They are all the same. Inertia is simply a measure of how an object resists a change in motion. 7. AravindG hmm. can u xplain inertia of direction and based on tht reason this: wheels of vehicles are provided with mudguards 8. anonymous The mud has a tendency to always maintain the same direction of travel. Eventually, this tendency exceeds the forces acting on the mud by the tire. When this happens, the mud will leave the tire in a tangential manner. 9. AravindG doubt 2:a body of mass 2kg moves with an acceleration of 3 m/s^2 find change in momentum in in one second 10. AravindG y is it the answer force ma? 11. anonymous That is the definition of Newton's Second Law. $F = {d \vec p \over dt}$ 12. AravindG doubt 3:what is average force? 13. anonymous $F_{avg} = {\Delta \vec p \over \Delta t}$ 14. AravindG doubt 4:A shell explodes in mid air into 2 equal fragments.what is direction of motion of 2 particles .explain this 15. anonymous This is a result of internal forces only. Since Newton's Second Law considers only external forces, there are couple inferences we can draw. Let's assume the shell is at rest when it explodes. First, the center of mass of the two particles will remain at the same point in space. Second, the net momentum of the two particles will equal zero (because the object is initially at rest). 16. AravindG can u xplain more i am cnfused 17. anonymous The exact direction and nature of the motion of the two particles after the explosion is dependent on the nature of the explosion force. 18. AravindG hw can we make such an assumption 19. 
AravindG y dont we consider gravity as external force? 20. AravindG ? 21. anonymous Gravity can be considered. The center of mass of the two particles will follow the path prescribed by the external forces. 22. anonymous This is definitely classical mechanics. 23. AravindG eashmore? 24. anonymous Yes? 25. AravindG xplain 26. anonymous 27. anonymous Let's keep things simple for now. Air resistance will change things. 28. AravindG wel wat is direction of motion 29. anonymous The motion of the individual particles is dependent on the nature of the explosion. The center of mass however, will have motion as described by external forces. 30. AravindG my text says they move in opp directions 31. AravindG for cinservation of momentum 32. anonymous Their net momentum after the explosion must equal the momentum of the shell before explosion. Because momentum is a vector quantity, they will travel away from each other (i.e. their velocities will have opposite signs). 33. AravindG we assume it was at rest?? 34. anonymous Let's take the simplest example. Take the shell to be on the ground, which is smooth. After the explosion the net momentum will be zero. 35. AravindG mth3v4 pls dont mess around 36. anonymous If the shell is in motion. The net momentum of the two particles will equal the momentum of the shell before the explosion. 37. AravindG k 38. AravindG hw do we recognize an internal force ?? u see i am vonfused with that 39. anonymous I already established this as being classical mechanics. Please don't comment if you don't have anything valuable to add. 40. anonymous I don't understand the question Aravind 41. AravindG wel u see an example like i hav a fan kept in a boat 42. AravindG then i turn the fan on facing the sail 43. anonymous Nvm. I understand. An internal force is one that does not mechanical energy of a system. 44. AravindG the boat doesnt move 45. anonymous An internal force is one that does not change the mechanical energy of a system. 46. 
AravindG hw do we recognize that . u seei thought he wind from fan can move the boat 47. anonymous Newton's Third Law. The fan pushes the wind against the sail, but the wind pushes back on the fan. Since the fan and sail are coupled by the boat, the boat doesn't move. 48. AravindG ya u r spam 49. AravindG can u show a fig 50. AravindG eashmore? 51. anonymous |dw:1328338036916:dw| 52. AravindG 53. anonymous Those are forces. The square below represents the boat system. There are not external forces. |dw:1328338113551:dw| 54. AravindG srry easmore bt i am not getting u 55. AravindG :( 56. AravindG i dont understand hww a return force acts on fan 57. anonymous Do you doubt the existence of the return force or what causes the return force? 58. anonymous Instead of a fan. Let's say you push directly on the sail mast. In this case, would the boat move? 59. anonymous I have to leave soon. 60. AravindG hey gogind can we cntinu with this discussion? 61. AravindG k pls xplain 62. anonymous hmm..lets see...You are having trouble with imagining the force that is acting on the fan, correct? 63. AravindG ys 64. anonymous Try imagine in like this. Fan pushes a bunch of molecules to the sail of the boat (we call that wind) , but since all these little molecules have some mass, newtons 3rd laws says that: As the blade of the fan exerts force on the molecules, all those molecules must exert the same force on the blade but it in opposite direction. OK! So now there is a bunch of molecules traveling towards the sail and eventually hitting it. When they hit the sail the same thing happens, they exert force on the sail, and since the sail is connected to the boat, they actually exert in on the boat. So in conclusion: The boat wants to go backwards, because of the force molecules exert on the blade of the fan which is connected to the boat, but the same molecules hit the sail which is also connected to the boat, causing the boat wanting to go forward. 
So the net result of this two forces is 0, because they have the same magnitude but opposite direction. What do you think would happen if there was no sail? 65. AravindG hmmm..it would have remained in rest 66. AravindG isnt it? 67. anonymous if there IS a sail it would remain at rest, because the forces cancel each other. If there were NO sail at all, the boat would travel backwards. In the boat with no sail, molecules that acted on the blade of the fan would just travel in the opposite direction of the boat with nothing to stop them (no sail), so the only force that is acting on the boat is the one acting on the fan, since there is not force to cancel in out the boat would move backwards. 68. AravindG k i understood 69. AravindG but is it sure that alll molecules from fan would hit the sail?? 70. anonymous of course not. What we are talking about here is the ideal situation. In real life a good part of molecules would not hit sail. Imagine you had a really small sail and a huge fan, some molecules would hit the sail but a bigger part of them would not. So the net effect would be that you are moving backwards. In this case the force acting on the blades is much garter then the one acting on the sail... 71. AravindG k i understood it cmpletely thx 72. AravindG i hav some othr doubts too can u help? 73. anonymous I don't have time, since I'm studying right now. Just post the question in the group and someone will answer, if they don't I'll answer later if I can 74. AravindG wen will u be free? 75. anonymous I don't know, when I get tired i guess :D. Just post on the group I'm sure someone will take a look at it 76. anonymous what doubts??
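For the record, "doubt 2" in the thread above (m = 2 kg, a = 3 m/s², over Δt = 1 s) follows from Newton's second law: Δp = F·Δt = m·a·Δt. The change in momentum comes out numerically equal to the force only because Δt = 1 s. A quick check:

```python
# Momentum change for the numbers quoted in the thread.
mass = 2.0   # kg
accel = 3.0  # m/s^2
dt = 1.0     # s

force = mass * accel  # Newton's second law, in N
delta_p = force * dt  # change in momentum, in kg*m/s
```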
Department of Mathematics

Seminar Calendar for events the day of Thursday, February 17, 2005.

Questions regarding events or the calendar should be directed to Tori Corkery.

Thursday, February 17, 2005

1:00 pm in 241 Altgeld Hall, Thursday, February 17, 2005

#### Congruences for the coefficients of weakly holomorphic modular forms, II
###### Stephanie Treneer (UIUC)

Abstract: Recent works have used the theory of modular forms to establish linear congruences for the partition function and for traces of singular moduli. We show that this type of phenomenon is completely general, by finding similar congruences for the coefficients of any weakly holomorphic modular form. In particular, we give congruences for a wide class of partition functions and for traces of CM values of arbitrary modular functions on certain congruence subgroups of prime level. Tuesday's talk will consist of an introduction to the problem, a statement of the main theorems, and a discussion of the two applications. Thursday we will prove the main theorems.

1:00 pm in Altgeld Hall 347, Thursday, February 17, 2005

#### The intersection form and geodesic currents on free groups
###### Ilya Kapovich (UIUC)

Abstract: The notion of a geometric intersection number between free homotopy classes of closed curves on surfaces plays a pivotal role in Thurston's treatment of the Teichmuller space and of the dynamics of surface homeomorphisms. In particular, Bonahon proved that this notion extends to a symmetric and bilinear notion of intersection number between two geodesic currents on a hyperbolic surface.
We investigate to what extent these ideas are applicable in the free group context. Thus we define and study an Out(F_n)-equivariant "intersection form" on the product of the (non-projectivized) Culler-Vogtmann outer space and the space of geodesic currents on a free group. We also find an obstruction, arising from non-symmetric behaviour of generic stretching factors of free group automorphisms, to the existence of a symmetric notion of an intersection number between two geodesic currents on a free group.

2:00 pm in Altgeld Hall 243, Thursday, February 17, 2005

#### Testing analyticity on circles
###### Alex Tumanov (UIUC)

Abstract: Consider a continuous one-parameter family of circles in the complex plane that contains two circles lying in the exterior of one another. Under mild assumptions on the family, we prove that if a continuous function on the union of the circles extends holomorphically into each circle, then the function is holomorphic. This result partially answers a question that has been open for about 30 years.

3:00 pm in 345 Altgeld Hall, Thursday, February 17, 2005

#### Spectral properties of a polyharmonic operator with limit-periodic potential in dimension two
###### Young-Ran Lee (UIUC Math)

Abstract: We consider a polyharmonic operator $$H=(-\Delta)^l+\sum_{n=1}^{\infty} V_n(x),$$ where $V_n(x)$ is periodic with the periods growing exponentially as $2^n$ and the $L_{\infty}$-norm decaying super-exponentially. We have shown that when $l>6$, a generalized version of the Bethe-Sommerfeld conjecture holds for this operator; in other words, its spectrum contains a semi-axis. We have also proved that there are eigenfunctions which are close to plane waves. (Joint work with Yulia Karpeshina.)

4:00 pm in 245 Altgeld Hall, Thursday, February 17, 2005
# Q.1 Write the shaded portion as a fraction. Arrange them in ascending and descending order using the correct sign ('<' or '>') between the fractions:

(c) Show $$\frac{2}{6}, \frac{4}{6}, \frac{8}{6}$$ and $$\frac{6}{6}$$ on the number line. Put appropriate signs between the fractions given: $$\frac{5}{6} \square \frac{2}{6}$$, $$\frac{3}{6} \square 0$$, $$\frac{1}{6} \square \frac{6}{6}$$, $$\frac{8}{6} \square \frac{5}{6}$$

Answer:

(a) $$\frac{3}{8}, \frac{6}{8}, \frac{4}{8}$$ and $$\frac{1}{8}$$

Ascending order: $$\frac{1}{8} < \frac{3}{8} < \frac{4}{8} < \frac{6}{8}$$

Descending order: $$\frac{6}{8} > \frac{4}{8} > \frac{3}{8} > \frac{1}{8}$$

(b) $$\frac{8}{9}, \frac{4}{9}, \frac{3}{9}$$ and $$\frac{6}{9}$$

Ascending order: $$\frac{3}{9} < \frac{4}{9} < \frac{6}{9} < \frac{8}{9}$$

Descending order: $$\frac{8}{9} > \frac{6}{9} > \frac{4}{9} > \frac{3}{9}$$

(c) $$\frac{5}{6} > \frac{2}{6}$$, $$\frac{3}{6} > 0$$, $$\frac{1}{6} < \frac{6}{6}$$, $$\frac{8}{6} > \frac{5}{6}$$
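The orderings in the answer above can be checked with Python's `fractions` module, which compares fractions exactly (illustration only, using the part (a) values):

```python
from fractions import Fraction

# Fractions from part (a); with a common denominator, the numerators decide the order.
fracs = [Fraction(3, 8), Fraction(6, 8), Fraction(4, 8), Fraction(1, 8)]
ascending = sorted(fracs)
descending = sorted(fracs, reverse=True)
```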
# Ask Uncle Colin: An Infinite Sum

Dear Uncle Colin

I've been asked to find $\sum_3^\infty \frac{1}{n^2-4}$. Obviously, I can split that into partial fractions, but then I get two series that diverge! What do I do?

- Which Absolute Losers Like Infinite Series?

Hi, WALLIS, and thanks for your message! Hey! I'm an absolute loser who likes infinite series, thank you very much!

As you say, you've split the sum into partial fractions: $\frac{1}{(n-2)(n+2)} = \frac{1}{4(n-2)} - \frac{1}{4(n+2)}$.

If you write the first few terms out, you get $\left( \frac{1}{4\times1} - \frac{1}{4\times 5}\right) + \left( \frac{1}{4\times2} - \frac{1}{4\times 6}\right) + \left( \frac{1}{4\times3} - \frac{1}{4\times 7}\right) + \left( \frac{1}{4\times4} - \frac{1}{4\times 8}\right) + \left( \frac{1}{4\times5} - \frac{1}{4\times 9}\right) + ...$

The $\frac{1}{4\times 5}$ terms - and, in fact, all of the terms after them - occur as both positive and negative, so they disappear. You're left with $\frac{1}{4} + \frac{1}{8} + \frac{1}{12} + \frac{1}{16} = \frac{12+6+4+3}{48} = \frac{25}{48}$.

Hope that's made you like infinite series a little more! I think they're rather neat - especially this trick of the telescoping sum.

- Uncle Colin

## Colin

Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
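The telescoping answer can also be checked numerically: partial sums of the series creep up on $\frac{25}{48} \approx 0.5208$. A one-liner check (my addition, not Uncle Colin's):

```python
# Partial sums of 1/(n^2 - 4) from n = 3: the telescoping argument says
# they approach 25/48 as the upper limit grows (the tail shrinks like 1/N).
partial = sum(1.0 / (n * n - 4) for n in range(3, 100001))
```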
# Theory data files

In the nnpdf++ project, FK tables (or grids) are used to provide the information required to compute perturbative QCD cross sections in a compact fashion. With the FK method a typical hadronic observable data point $$\mathcal{O}$$ is computed as

$$\mathcal{O}_d= \sum_{\alpha,\beta}^{N_x}\sum_{i,j}^{N_{\mathrm{pdf}}} \sigma^{(d)}_{\alpha\beta i j}N_i^0(x_\alpha)N_j^0(x_\beta),$$

where $$\sigma_{\alpha\beta i j}^{(d)}$$, the FK table, is a five-index object with two indices in flavour ($$i$$, $$j$$), two indices in $$x$$ ($$\alpha$$, $$\beta$$) and a data point index $$d$$. $$N^0_i({x_\alpha})$$ is the $$i^{\mathrm{th}}$$ initial scale PDF in the evolution basis at $$x$$-grid point $$x=x_\alpha$$. Each FK table has an internally specified $$x$$-grid upon which the PDFs are interpolated. The full 14-PDF evolution basis used in the FK tables is given by

$$\left\{ \gamma, \Sigma,g,V,V3,V8,V15,V24,V35,T3,T8,T15,T24,T35\right\}.$$

Additional information may be introduced via correction factors known internally as $$C$$-factors. These consist of data point by data point multiplicative corrections to the final result of the FK convolution $$\mathcal{O}$$. These are provided by CFACTOR files, typical applications being NNLO and electroweak corrections. For processes which depend non-linearly upon PDFs, such as cross-section ratios or asymmetries, multiple FK tables may be required for one observable. In this case information is provided in the form of a COMPOUND file which specifies how the results from several FK tables may be combined to produce the target observable. In this section we shall specify the layout of the FK, COMPOUND and CFACTOR files.

## FK table compression

It is important to note that the FK table format as described here pertains to the uncompressed tables. Typically, FK tables as found and read by the NNPDF code are compressed individually with gzip.
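To make the index structure of the convolution concrete, here is a toy implementation in pure Python (not NNPDF code; real tables have $$N_{\mathrm{pdf}}=14$$ and optimised $$x$$-grids, the dimensions below are illustrative):

```python
def fk_convolute(sigma, pdf):
    """Toy FK convolution: O_d = sum_{a,b,i,j} sigma[d][a][b][i][j] * N[i][a] * N[j][b].

    sigma: nested lists with shape [Ndat][Nx][Nx][Npdf][Npdf]
    pdf:   initial-scale PDF values with shape [Npdf][Nx]
    Returns the list of predictions, one per data point.
    """
    preds = []
    for block_d in sigma:
        total = 0.0
        for a, block_a in enumerate(block_d):
            for b, block_b in enumerate(block_a):
                for i, row in enumerate(block_b):
                    for j, weight in enumerate(row):
                        total += weight * pdf[i][a] * pdf[j][b]
        preds.append(total)
    return preds
```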
## FK preamble layout

The FK preamble is constructed from a set of data segments, of which there are two configurations. The first configuration consists of a list of key-value pairs, and the second is a simple data ‘blob’ with no requirements as to its formatting. Each segment begins with a delineating line, which for key-value pairs is

_SegmentName_____________________________________________

and for data blobs is

{SegmentName_____________________________________________

The key difference is the first character: underscore (_) for key-value pair segments, and open curly brace ({) for data blobs. The name of the segment is specified from the second character to a terminating underscore (_). The line is then typically padded out with underscores up to 60 characters. Following this delineating line, for a key-value segment, the following lines must all be of the format

*KEY: VALUE

with the first character required to be an asterisk (*), then specifying the key and value for that segment. For blob-type segments, no constraints are placed upon the format, aside from the requirement that no line begin with one of the delineating characters { or _, as these would trigger the construction of a new segment. While the user may specify additional segments, both key-value pair and blob-type, for their own use, there are seven segments required by the code. These are, specified by their segment name:

• GridDesc [BLOB] This segment provides a ‘banner’ with a short description for the FK table. The contents of this banner are displayed when the table is read from file.

• VersionInfo [K-V] A list specifying the versions of the various pieces of code used in the generation of this FK table (minimally libnnpdf and apfel).

• GridInfo [K-V] This list specifies various architectural points of the FK table. The required keys are specified in FK configuration variables.

• TheoryInfo [K-V] A list of all the theory parameters used in the generation of the table.
The required keys are specified in Theory parameter definitions. • FlavourMap [BLOB] The segment describes the flavour structure of the grid by means of a flavour map. This map details which flavour channels are active in the grid, using the basis specified here. For DIS processes, an example section would be {FlavourMap_____________________________________________ 0 1 1 0 0 0 0 0 0 0 1 0 0 0 which specifies that only the Singlet, gluon and $$T_8$$ channels are populated in the grid. In the case of hadronic FK tables, the full $$14\times 14$$ flavour combination matrix is specified in the same manner. Consider the flavourmap for the CDFR2KT Dataset: {FlavourMap_____________________________________________ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 This flavourmap contains 9 nonzero entries, demonstrating the importance of only computing those flavour combinations that are relevant to the process. Additionally this map instructs the nnpdf++ convolution code as to which elements of the FastKernel grid should be read, to minimise holding zero entries in memory. • xGrid [BLOB] This segment defines the $$x$$-grid upon which the FK grid is defined, given as an $$N_x$$ long list of the $$x$$-grid points. This grid should be optimised to minimise FK grid zeros in $$x$$-space. The blob is a simple list of the grid points, here is an example of an $$x$$-grid with $$N_x=5$$ entries: {xGrid_____________________________________________ 0.10000000000000001 0.13750000000000001 0.17499999999999999 0.21250000000000002 1.00000000000000000 For examples of complete DIS and hadronic FK table headers, see Example: FK preamble. 
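The segment rules above (first character selects segment type, asterisk-prefixed key-value lines, free-form blob lines) can be captured in a few lines of Python. This is an illustrative sketch of a preamble reader, not the actual libnnpdf parser:

```python
def parse_preamble(lines):
    """Split FK preamble lines into segments keyed by segment name.

    Key-value segments (delineator starting with '_') become dicts;
    blob segments (delineator starting with '{') become lists of raw lines.
    """
    segments = {}
    current = None
    for line in lines:
        if line and line[0] in "_{":
            # Segment name runs from the second character to the first underscore.
            name = line[1:].split("_", 1)[0]
            current = {} if line[0] == "_" else []
            segments[name] = current
        elif isinstance(current, dict) and line.startswith("*"):
            key, _, value = line[1:].partition(":")
            current[key.strip()] = value.strip()
        elif isinstance(current, list):
            current.append(line)
    return segments
```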
## FK grid layout To start the section of the file with the FK grid itself, we begin with a blob-type segment delineator: {FastKernel_____________________________________________ The grid itself is now written out. For hadronic data, the format is line by line as follows: $$d \:\: \alpha \:\: \beta \:\: \sigma^d_{\alpha\beta 1 1} \:\: \sigma^d_{\alpha\beta 1 2}\:\: ....\:\: \sigma^d_{\alpha\beta n n}$$ where $$d$$ is the index of the data point for that line, $$\alpha$$ is the $$x$$-index of the first PDF, $$\beta$$ is the $$x$$-index of the second PDF, the $$\sigma^d_{\alpha\beta i j}$$ are the values of the FastKernel grid for data point $$d$$ as in the equation here, and $$n=14$$ is the total number of parton flavours in the grid. Therefore the full $$14\times 14$$ flavour space for one combination of the indices $$\{d,\alpha,\beta\}$$ is written out on each line. These lines should be written out first in $$\beta$$, then $$\alpha$$ and finally $$d$$ so that the FK grids are written in blocks of data points. All FK grid values should be written out in double precision. For DIS data the FK grids must be written out as $$d \:\: \alpha \:\: \sigma^d_{\alpha 1} \:\: \sigma^d_{\alpha 2}\:\: ....\:\: \sigma^d_{\alpha n}$$ Therefore here all $$n=14$$ values are written out for each combination of $$\{d,\alpha\}$$. When writing out the grids, note that only $$x$$-grid points for which there are nonzero FK entries are written out. For example, there should be no lines such as: $$d \:\: \alpha \:\: \beta \:\: 0 \:\: 0 \:\: 0 \:\: .... \:\: 0$$ However, for those $$x$$-grid points which do have nonzero $$\sigma$$ contributions, the full set of flavour contributions must be written out regardless of the number of zero entries. This choice was made in order that the nonzero flavour entries may be examined/optimised by hand after the FK table is generated. The FK file should end on the last entry in the grid, and without empty lines at the end of file. 
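A reader for the grid lines described above is straightforward: hadronic lines carry three integer indices followed by $$14\times14$$ values, DIS lines two indices followed by 14 values. A minimal sketch (illustrative, not the NNPDF reader):

```python
def parse_fk_line(line, hadronic=True):
    """Split one FastKernel grid line into its indices and sigma values.

    Hadronic lines: 'd alpha beta' followed by 14*14 grid values;
    DIS lines:      'd alpha' followed by 14 grid values.
    Returns (indices, values) with indices as a tuple of ints.
    """
    tokens = line.split()
    n_idx = 3 if hadronic else 2
    indices = tuple(int(t) for t in tokens[:n_idx])
    values = [float(t) for t in tokens[n_idx:]]
    return indices, values
```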
### CFACTOR file format Additional multiplicative factors to be applied to the output of the FK convolution may be introduced by the use of CFACTOR files. These files have a very simple format. They begin with a header providing a description of the $$C$$-factor information stored in the file. This segment is initialised and terminated by a line beginning with a star (*) character and consists of six mandatory fields: • SetName - The Dataset name. • Author - The author of the CFACTOR file. • Date - The date of authorship. • CodesUsed - The code or codes used in generating the $$C$$-factors. • TheoryInput - Theory input parameters used in the $$C$$-factors (e.g $$\alpha_S$$, scales). • PDFset - The PDF set used in the $$C$$-factors. These fields are formatted as FieldName: FieldEntry and may be accompanied by any additional information, within the star delineated header region. Consider the following as a complete example of the header, *************************************** SetName: D0ZRAP Author: John Doe john.doe@cern.ch Date: 2014 CodesUsed: MCFM 15.01 TheoryInput: as 0.118, central scale 91.2 GeV PDFset: NNPDF30_as_0118_nnlo Warnings: None *************************************** The remainder of the file consists of the $$C$$-factors themselves, and the error upon the $$C$$-factors. Each line is now the $$C$$-factor for each data point, with the whitespace separated uncertainty. For example, for Dataset with five points, the data section of a CFACTOR file may be: 1.1 0.1 1.2 0.12 1.3 0.13 1.4 0.14 1.5 0.15 where the $$i^{\text{th}}$$ line corresponds to the $$C$$-factor to be applied to the FK prediction for the $$(i-1)^{\text{th}}$$ data point. The first column denotes the value of the $$C$$-factor and the second column denotes the uncertainty upon it (in absolute terms, not as a percentage or otherwise relative to the $$C$$-factor). For a complete example of a CFACTOR file, please see Example: CFACTOR file format. 
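Given the format above, a CFACTOR file can be read in a handful of lines. The sketch below assumes, as in the example, that exactly the two header delimiters begin with a star character; it is an illustration, not the NNPDF implementation:

```python
def read_cfactors(lines):
    """Parse a CFACTOR file given as a list of lines.

    Skips the header (delimited by the two lines beginning with '*'),
    then reads one 'cfactor uncertainty' pair per data point.
    """
    star_rows = [i for i, line in enumerate(lines) if line.startswith("*")]
    body = lines[star_rows[1] + 1:]  # everything after the closing delimiter
    pairs = []
    for line in body:
        if line.strip():
            cfac, err = line.split()
            pairs.append((float(cfac), float(err)))
    return pairs
```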
### COMPOUND file format

Some Datasets cover observables that depend non-linearly upon the input PDFs. For example, the NMCPD Dataset is a measurement of the ratio of deuteron to proton structure functions. In the nnpdf++ code such sets are denoted Compound Datasets. In these cases, a prescription must be given for how the results from FK convolutions, as in this equation, should be combined. The COMPOUND files are a simple method of providing this information. For each Compound Dataset a COMPOUND file is provided that contains the information on how to build the observable from constituent FK tables. The following operations are currently implemented:

| Operation $$(N_{\text{FK}})$$ | Code | Output Observable |
|---|---|---|
| Null Operation (1) | NULL | $$\mathcal{O}_d = \mathcal{O}_d^{(1)}$$ |
| Sum (2) | | $$\mathcal{O}_d = \mathcal{O}^{(1)}_d + \mathcal{O}^{(2)}_d$$ |
| Sum (10) | SMT | $$\mathcal{O}_d = \sum_{i=1}^{10}\mathcal{O}^{(i)}_d$$ |
| Normalised Sum (4) | SMN | $$\mathcal{O}_d = (\mathcal{O}^{(1)}_d + \mathcal{O}^{(2)}_d)/(\mathcal{O}^{(3)}_d + \mathcal{O}^{(4)}_d)$$ |
| Asymmetry (2) | ASY | $$\mathcal{O}_d = (\mathcal{O}^{(1)}_d - \mathcal{O}^{(2)}_d)/(\mathcal{O}^{(1)}_d + \mathcal{O}^{(2)}_d)$$ |
| Combination (20) | COM | $$\mathcal{O}_d = \sum_{i=1}^{10}\mathcal{O}^{(i)}_d/\sum_{i=11}^{20}\mathcal{O}^{(i)}_d$$ |
| Ratio (2) | RATIO | $$\mathcal{O}_d = \mathcal{O}^{(1)}_d / \mathcal{O}^{(2)}_d$$ |

Here $$N_{\text{FK}}$$ refers to the number of tables required for each compound operation. $$\mathcal{O}_d$$ is the final observable prediction for the $$d^{\text{th}}$$ point in the Dataset. $$\mathcal{O}_d^{(i)}$$ refers to the observable prediction for the $$d^{\text{th}}$$ point arising from the $$i^{\text{th}}$$ FK table calculation. Note that here the ordering in $$i$$ is important.

The COMPOUND file layout is as follows. The first line is once again a general comment line and is not used by the code, and therefore has no particular requirements other than its presence.
Following this line should come a list of the FK tables required for the calculation. This must be given as the table’s filename without its path, preceded by the string ‘FK:’. For example,

    FK: FK_SETNAME_1.dat
    FK: FK_SETNAME_2.dat

The ordering of the list is once again important, and must match the above table. For example, the observables $$\mathcal{O}^{(i)}$$ arise from the computation with the $$i^{\text{th}}$$ element of this list. The final line specifies the operation to be performed upon the list of tables, and must take the form

    OP: [CODE]

where the [CODE] is given in the above table. Here is an example of a complete COMPOUND file:

    # COMPOUND FK
    FK: FK_NUMERATOR.dat
    FK: FK_DENOMINATOR.dat
    OP: RATIO
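The compound operations in the table can be expressed directly as a dispatch on the operation code. This sketch implements the codes listed in the table (the two-table Sum, whose code is not given there, is omitted); it is illustrative, not the nnpdf++ implementation:

```python
def apply_compound(code, obs):
    """Combine per-table predictions `obs` for one data point.

    `obs[i]` is the prediction from the (i+1)-th FK table, in the
    order given in the COMPOUND file.
    """
    if code == "NULL":
        return obs[0]
    if code == "SMT":
        return sum(obs[:10])
    if code == "SMN":
        return (obs[0] + obs[1]) / (obs[2] + obs[3])
    if code == "ASY":
        return (obs[0] - obs[1]) / (obs[0] + obs[1])
    if code == "COM":
        return sum(obs[:10]) / sum(obs[10:20])
    if code == "RATIO":
        return obs[0] / obs[1]
    raise ValueError("unknown compound operation code: %s" % code)
```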
Question

The same rocket sled drawn in Figure 4.30 is decelerated at a rate of $196 \textrm{ m/s}^2$. What force is necessary to produce this deceleration? Assume that the rockets are off. The mass of the system is 2100 kg.

Answer

$4.1\times 10^{5}\textrm{ N}$
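The answer follows directly from Newton's second law, F = ma = 2100 kg × 196 m/s², directed opposite to the sled's motion:

```python
# Newton's second law with the numbers from the problem.
mass = 2100.0   # kg
decel = 196.0   # m/s^2
force = mass * decel  # N; 411,600 N, quoted as 4.1e5 N to two significant figures
```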
# Are you kidding me? Algebra Level 4

$\large \begin{cases} x^{2}-2xy+2y^{2}-20x+5y-4=0 \\ 3x+2y+1=0 \end{cases}$

Given that there exist two solution pairs $$(x_{1},y_{1})$$ and $$(x_{2},y_{2})$$ for the above system of equations, find $$x_{1}+y_{1}+x_{2}+y_{2}$$ correct to three decimal places.
# How do you graph y=-(1/5)^x and state the domain and range?

Feb 4, 2017

Domain: $(-\infty, \infty)$. Range: $(-\infty, 0)$. See graph and explanation.

#### Explanation:

$y = -\left(\frac{1}{5}\right)^x < 0$ for all $x$, and $y \to 0$ as $x \to \infty$.

y-intercept ($x = 0$): $-1$

As $x \to -\infty$, $y \to -\infty$.

So, the domain is $(-\infty, \infty)$ and the range is $(-\infty, 0)$.

graph{-(.2)^x [-2.5, 2.5, -1.25, 1.25]}
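A quick numerical spot-check of the stated behaviour (my addition, not part of the original answer): the function is defined for every real x, stays negative, approaches 0 from below for large x, and blows down to minus infinity for large negative x.

```python
def y(x):
    # y = -(1/5)^x, defined for every real x and always negative
    return -((1 / 5) ** x)
```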
What are the strength and direction of the electric field at the position indicated by the dot in the figure?

Part A: Specify the strength of the electric field, E = ____ N/C.

Part B: Specify the direction in degrees.
# Genetic Drift

> […] in a finite-sized population the proportion of alleles fluctuates due to stochastic sampling errors, so even in the absence of any selective pressure, the genes will eventually become fixated at one particular allele. – Thierens 1998 [1]

Genetic Drift (or just "drift") is the phenomenon of the progressive loss of genetic information among less salient building blocks. While the algorithm is working hard to converge the salient building blocks, the less salient building blocks are also converging. But because they have a lower marginal fitness contribution, they converge by chance ("drift") alone.

Studies suggest that the expected time for a gene to converge due to genetic drift alone is, in very general terms, proportional to the population size. According to the Domino Convergence model (Thierens 1998 [1]), the time complexity to fully converge on the optimal solution is linear in the encoding length l (i.e., O(l)), though only for an algorithm exhibiting constant selection intensity (see Domino Convergence).

In the absence of other factors, the conclusion might be that as the encoding length increases, we must increase the population size proportionately to ensure that genetic drift does not overtake domino convergence. In practice, the rule generally seems to hold, but the required adjustment is somewhat less than strictly linear. So, dilettante beware! As the encoding length increases, if the effective population size is not adjusted to compensate, we may find ourselves suffering from the effects of genetic drift.

# References

1. Domino Convergence, Drift, and the Temporal-Salience Structure of Problems – Dirk Thierens, David E. Goldberg, Angela Guimaraes Pereira, 1998
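The "fixation time grows with population size" claim is easy to see in a toy neutral-drift simulation. The sketch below is a simple Wright-Fisher-style resampling model chosen for illustration (an assumption on my part, not the model used by Thierens 1998): one biallelic gene, no selection, each generation resampled from the current allele frequency.

```python
# Toy neutral drift: a gene with no fitness contribution converges ("fixes")
# by sampling noise alone, and larger populations take longer to fix.
import random

def fixation_time(pop_size, rng):
    """Generations until a neutral allele starting at 50% fixes or is lost."""
    ones = pop_size // 2
    t = 0
    while 0 < ones < pop_size:
        p = ones / pop_size
        # Each gene in the next generation is drawn independently from the
        # current allele frequency (binomial resampling).
        ones = sum(1 for _ in range(pop_size) if rng.random() < p)
        t += 1
    return t

rng = random.Random(0)
results = {}
for n in (20, 40, 80):
    results[n] = sum(fixation_time(n, rng) for _ in range(200)) / 200
    print(f"population {n:3d}: mean fixation time ~ {results[n]:.1f} generations")
```

Doubling the population size roughly doubles the mean fixation time, consistent with drift convergence time being proportional to population size.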
# In a certain experiment, 'V' volume of acid is added to alcohol to obtain a concentration of 1 part in 400. If the surface area of the solution is A, then the thickness of a molecule of the acid is:

1. $$\sqrt{400A}$$
2. $$\frac{400}{AV}$$
3. $$\frac{V}{400A}$$
4. $$\frac{400\sqrt V}{A}$$

Option 3: $$\frac{V}{400A}$$

## Detailed Solution

The correct answer is option 3, i.e. $$\frac{V}{400A}$$.

CONCEPT:

- For estimating measurements as small as the size of a molecule, a commonly used method is the oleic acid experiment.
- Oleic acid is a soapy liquid with a large molecular size, of the order of 10⁻⁹ m.
- 1 cm³ of oleic acid is mixed in alcohol to make a solution of 20 cm³. 1 cm³ of this solution is further diluted to 20 cm³ with alcohol.
- Thus, the concentration of the solution is 1 part of oleic acid in 20 × 20 = 400 parts of solution.
- Lycopodium powder is sprinkled on the surface of the water in a large trough and one drop of this solution is added to the water.
- As the lycopodium powder weakens the surface tension, the oleic acid drop spreads into a thin, large and circular film of molecular thickness on the water surface.
- The diameter of the thin film is measured to get its area A.
- If n drops, each of V cm³ of solution, were added to the water, the volume of n drops = nV cm³.

Then the amount of oleic acid in this solution is $$nV\left(\frac{1}{20 \times 20}\right) \text{cm}^3$$.

The thickness of the acid film is

$$t = \frac{\text{volume of the film}}{\text{area of the film}} = \frac{nV}{20\times 20\, A}\ \text{cm}$$

- This will be the size of the molecule of oleic acid.

EXPLANATION:

Given: amount of acid solution = V, concentration = $$\frac{1}{400}$$, area of the acid film = A.

∴ Thickness, $$t = \frac{V}{400A}$$
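A quick worked check of the formula, with made-up numbers for V and A (both are hypothetical values chosen only for illustration):

```python
# Thickness of the molecular film for a 1-in-400 dilution: t = V / (400 * A).
V = 1.0e-3   # cm^3 of solution added (assumed value for illustration)
A = 50.0     # cm^2, measured film area (assumed value for illustration)

t = V / (400 * A)        # cm, estimated molecular thickness
print(f"t = {t:.2e} cm")  # -> t = 5.00e-08 cm
```

The result, ~5 × 10⁻⁸ cm = 5 × 10⁻¹⁰ m, is indeed of molecular scale, consistent with the ~10⁻⁹ m size quoted above.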